I0127 10:47:12.995329 8 e2e.go:224] Starting e2e run "5f97db3d-40f2-11ea-b652-0242ac110005" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1580122031 - Will randomize all specs Will run 201 of 2164 specs Jan 27 10:47:13.246: INFO: >>> kubeConfig: /root/.kube/config Jan 27 10:47:13.249: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 27 10:47:13.273: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 27 10:47:13.309: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 27 10:47:13.309: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 27 10:47:13.309: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 27 10:47:13.320: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 27 10:47:13.320: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Jan 27 10:47:13.320: INFO: e2e test version: v1.13.12 Jan 27 10:47:13.321: INFO: kube-apiserver version: v1.13.8 SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:47:13.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap Jan 27 10:47:13.499: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-60976911-40f2-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 10:47:13.517: INFO: Waiting up to 5m0s for pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-8pjt2" to be "success or failure" Jan 27 10:47:13.533: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.936608ms Jan 27 10:47:15.800: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282898971s Jan 27 10:47:17.824: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307170607s Jan 27 10:47:20.109: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591533991s Jan 27 10:47:22.125: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607664191s Jan 27 10:47:24.137: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.619449879s Jan 27 10:47:26.154: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.637099053s STEP: Saw pod success Jan 27 10:47:26.154: INFO: Pod "pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:47:26.167: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 27 10:47:27.062: INFO: Waiting for pod pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005 to disappear Jan 27 10:47:27.093: INFO: Pod pod-configmaps-6097faca-40f2-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:47:27.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8pjt2" for this suite. Jan 27 10:47:33.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:47:33.487: INFO: namespace: e2e-tests-configmap-8pjt2, resource: bindings, ignored listing per whitelist Jan 27 10:47:33.515: INFO: namespace e2e-tests-configmap-8pjt2 deletion completed in 6.315034977s • [SLOW TEST:20.194 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:47:33.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-s9cx2 Jan 27 10:47:45.785: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-s9cx2 STEP: checking the pod's current state and verifying that restartCount is present Jan 27 10:47:45.791: INFO: Initial restart count of pod liveness-http is 0 Jan 27 10:48:03.968: INFO: Restart count of pod e2e-tests-container-probe-s9cx2/liveness-http is now 1 (18.177893156s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:48:04.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s9cx2" for this suite. 
Jan 27 10:48:10.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:48:10.240: INFO: namespace: e2e-tests-container-probe-s9cx2, resource: bindings, ignored listing per whitelist Jan 27 10:48:10.266: INFO: namespace e2e-tests-container-probe-s9cx2 deletion completed in 6.20366725s • [SLOW TEST:36.750 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:48:10.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 10:48:10.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-ql8z2" to be "success or failure" Jan 27 10:48:10.501: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.217702ms Jan 27 10:48:12.530: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043020737s Jan 27 10:48:14.539: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051715037s Jan 27 10:48:17.009: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521890634s Jan 27 10:48:19.157: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.669530902s Jan 27 10:48:21.173: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.685565808s STEP: Saw pod success Jan 27 10:48:21.173: INFO: Pod "downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:48:21.186: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 10:48:21.534: INFO: Waiting for pod downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005 to disappear Jan 27 10:48:22.622: INFO: Pod downwardapi-volume-8287d3f6-40f2-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:48:22.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ql8z2" for this suite. Jan 27 10:48:28.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:48:29.221: INFO: namespace: e2e-tests-projected-ql8z2, resource: bindings, ignored listing per whitelist Jan 27 10:48:29.274: INFO: namespace e2e-tests-projected-ql8z2 deletion completed in 6.639155363s • [SLOW TEST:19.008 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:48:29.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jan 27 10:48:29.576: INFO: Waiting up to 5m0s for pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005" in namespace "e2e-tests-containers-clpbm" to be "success or failure" Jan 27 10:48:29.603: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.533554ms Jan 27 10:48:31.917: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340491825s Jan 27 10:48:33.941: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364178885s Jan 27 10:48:35.980: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403558806s Jan 27 10:48:38.001: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424510661s Jan 27 10:48:40.457: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.880879829s STEP: Saw pod success Jan 27 10:48:40.457: INFO: Pod "client-containers-8dede506-40f2-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:48:40.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8dede506-40f2-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 10:48:40.883: INFO: Waiting for pod client-containers-8dede506-40f2-11ea-b652-0242ac110005 to disappear Jan 27 10:48:40.888: INFO: Pod client-containers-8dede506-40f2-11ea-b652-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:48:40.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-clpbm" for this suite. Jan 27 10:48:46.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:48:47.115: INFO: namespace: e2e-tests-containers-clpbm, resource: bindings, ignored listing per whitelist Jan 27 10:48:47.115: INFO: namespace e2e-tests-containers-clpbm deletion completed in 6.220797785s • [SLOW TEST:17.841 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:48:47.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-n5smr Jan 27 10:48:58.045: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-n5smr STEP: checking the pod's current state and verifying that restartCount is present Jan 27 10:48:58.052: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:52:58.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-n5smr" for this suite. 
Jan 27 10:53:06.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:53:06.685: INFO: namespace: e2e-tests-container-probe-n5smr, resource: bindings, ignored listing per whitelist Jan 27 10:53:06.716: INFO: namespace e2e-tests-container-probe-n5smr deletion completed in 8.308270279s • [SLOW TEST:259.600 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:53:06.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 27 10:53:06.940: INFO: Waiting up to 5m0s for pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-7r4tl" to be "success or failure" Jan 27 10:53:06.945: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.769577ms Jan 27 10:53:08.957: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017211525s Jan 27 10:53:10.979: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039043833s Jan 27 10:53:13.004: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064211312s Jan 27 10:53:15.024: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083631506s Jan 27 10:53:17.061: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121457802s STEP: Saw pod success Jan 27 10:53:17.062: INFO: Pod "pod-333fbc4e-40f3-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:53:17.076: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-333fbc4e-40f3-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 10:53:17.166: INFO: Waiting for pod pod-333fbc4e-40f3-11ea-b652-0242ac110005 to disappear Jan 27 10:53:17.198: INFO: Pod pod-333fbc4e-40f3-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:53:17.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7r4tl" for this suite. 
Jan 27 10:53:23.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:53:23.482: INFO: namespace: e2e-tests-emptydir-7r4tl, resource: bindings, ignored listing per whitelist Jan 27 10:53:23.535: INFO: namespace e2e-tests-emptydir-7r4tl deletion completed in 6.327859457s • [SLOW TEST:16.819 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:53:23.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jan 27 10:53:23.826: INFO: Waiting up to 5m0s for pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005" in namespace "e2e-tests-var-expansion-svcpl" to be "success or failure" Jan 27 10:53:23.843: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.198328ms Jan 27 10:53:25.893: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06683902s Jan 27 10:53:27.980: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153559073s Jan 27 10:53:30.331: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504927032s Jan 27 10:53:32.574: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.747473189s Jan 27 10:53:34.615: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.788799091s STEP: Saw pod success Jan 27 10:53:34.616: INFO: Pod "var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:53:34.897: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005 container dapi-container: STEP: delete the pod Jan 27 10:53:35.147: INFO: Waiting for pod var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005 to disappear Jan 27 10:53:35.168: INFO: Pod var-expansion-3d4bb4c7-40f3-11ea-b652-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:53:35.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-svcpl" for this suite. 
Jan 27 10:53:41.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:53:41.406: INFO: namespace: e2e-tests-var-expansion-svcpl, resource: bindings, ignored listing per whitelist Jan 27 10:53:41.411: INFO: namespace e2e-tests-var-expansion-svcpl deletion completed in 6.206847816s • [SLOW TEST:17.876 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:53:41.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 27 10:53:41.593: INFO: Waiting up to 5m0s for pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-xtptz" to be "success or failure" Jan 27 10:53:41.719: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 126.326058ms Jan 27 10:53:43.746: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153373737s Jan 27 10:53:45.762: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168824479s Jan 27 10:53:48.308: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715153398s Jan 27 10:53:50.326: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.733053498s Jan 27 10:53:52.511: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.918319846s STEP: Saw pod success Jan 27 10:53:52.511: INFO: Pod "pod-47e63ccc-40f3-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:53:53.218: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-47e63ccc-40f3-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 10:53:53.542: INFO: Waiting for pod pod-47e63ccc-40f3-11ea-b652-0242ac110005 to disappear Jan 27 10:53:53.614: INFO: Pod pod-47e63ccc-40f3-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:53:53.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xtptz" for this suite. 
Jan 27 10:53:59.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:53:59.783: INFO: namespace: e2e-tests-emptydir-xtptz, resource: bindings, ignored listing per whitelist Jan 27 10:53:59.889: INFO: namespace e2e-tests-emptydir-xtptz deletion completed in 6.226157144s • [SLOW TEST:18.478 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:53:59.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005 Jan 27 10:54:00.116: INFO: Pod name my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005: Found 0 pods out of 1 Jan 27 10:54:05.635: INFO: Pod name my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005: Found 1 pods out of 1 Jan 27 10:54:05.635: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005" are running Jan 27 10:54:10.591: INFO: Pod "my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005-x6p4m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 10:54:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 10:54:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 10:54:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 10:54:00 +0000 UTC Reason: Message:}]) Jan 27 10:54:10.591: INFO: Trying to dial the pod Jan 27 10:54:15.634: INFO: Controller my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005: Got expected result from replica 1 [my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005-x6p4m]: "my-hostname-basic-52f17e99-40f3-11ea-b652-0242ac110005-x6p4m", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:54:15.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-5fj74" for this suite. 
Jan 27 10:54:21.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:54:21.834: INFO: namespace: e2e-tests-replication-controller-5fj74, resource: bindings, ignored listing per whitelist Jan 27 10:54:21.901: INFO: namespace e2e-tests-replication-controller-5fj74 deletion completed in 6.256593947s • [SLOW TEST:22.011 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:54:21.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-zdccb/configmap-test-602388c1-40f3-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 10:54:22.413: INFO: Waiting up to 5m0s for pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-zdccb" to be "success or failure" Jan 27 10:54:22.451: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.750908ms Jan 27 10:54:24.492: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078609193s Jan 27 10:54:26.509: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095628205s Jan 27 10:54:28.515: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10237846s Jan 27 10:54:30.674: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260705888s Jan 27 10:54:32.689: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275657928s Jan 27 10:54:34.712: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.299036863s STEP: Saw pod success Jan 27 10:54:34.712: INFO: Pod "pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:54:34.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005 container env-test: STEP: delete the pod Jan 27 10:54:34.782: INFO: Waiting for pod pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005 to disappear Jan 27 10:54:34.789: INFO: Pod pod-configmaps-6039f646-40f3-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:54:34.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zdccb" for this suite. Jan 27 10:54:40.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:54:41.064: INFO: namespace: e2e-tests-configmap-zdccb, resource: bindings, ignored listing per whitelist Jan 27 10:54:41.067: INFO: namespace e2e-tests-configmap-zdccb deletion completed in 6.220520136s • [SLOW TEST:19.166 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:54:41.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:54:54.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-5dfx9" for this suite. 
Jan 27 10:55:20.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:55:20.585: INFO: namespace: e2e-tests-replication-controller-5dfx9, resource: bindings, ignored listing per whitelist Jan 27 10:55:20.788: INFO: namespace e2e-tests-replication-controller-5dfx9 deletion completed in 26.381435418s • [SLOW TEST:39.721 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:55:20.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 27 10:55:31.607: INFO: Successfully updated pod "annotationupdate8326596a-40f3-11ea-b652-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:55:35.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m5dxd" for this suite. 
Jan 27 10:56:00.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:56:00.145: INFO: namespace: e2e-tests-projected-m5dxd, resource: bindings, ignored listing per whitelist Jan 27 10:56:00.156: INFO: namespace e2e-tests-projected-m5dxd deletion completed in 24.221033117s • [SLOW TEST:39.368 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:56:00.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 27 10:56:13.442: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:56:14.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-dxbg4" for this suite. 
Jan 27 10:56:44.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:56:44.651: INFO: namespace: e2e-tests-replicaset-dxbg4, resource: bindings, ignored listing per whitelist Jan 27 10:56:44.691: INFO: namespace e2e-tests-replicaset-dxbg4 deletion completed in 30.127966452s • [SLOW TEST:44.535 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:56:44.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:56:54.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6wkpv" for this suite. 
Jan 27 10:57:38.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:57:39.076: INFO: namespace: e2e-tests-kubelet-test-6wkpv, resource: bindings, ignored listing per whitelist Jan 27 10:57:39.154: INFO: namespace e2e-tests-kubelet-test-6wkpv deletion completed in 44.192480296s • [SLOW TEST:54.462 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:57:39.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-d59bea20-40f3-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 10:57:39.354: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-49sql" to be "success or failure" Jan 27 10:57:39.370: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.533681ms Jan 27 10:57:41.468: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113772291s Jan 27 10:57:43.498: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143675717s Jan 27 10:57:45.801: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446698998s Jan 27 10:57:47.956: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601876304s Jan 27 10:57:50.403: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.049026724s STEP: Saw pod success Jan 27 10:57:50.403: INFO: Pod "pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:57:50.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 27 10:57:50.954: INFO: Waiting for pod pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005 to disappear Jan 27 10:57:50.965: INFO: Pod pod-projected-configmaps-d59ce524-40f3-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:57:50.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-49sql" for this suite. Jan 27 10:57:59.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:57:59.092: INFO: namespace: e2e-tests-projected-49sql, resource: bindings, ignored listing per whitelist Jan 27 10:57:59.225: INFO: namespace e2e-tests-projected-49sql deletion completed in 8.253077758s • [SLOW TEST:20.071 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:57:59.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e19eef2e-40f3-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 10:57:59.542: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-lx5cs" to be "success or failure" Jan 27 10:57:59.563: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.60809ms Jan 27 10:58:01.585: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04301861s Jan 27 10:58:03.599: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057416756s Jan 27 10:58:05.837: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.295142137s Jan 27 10:58:07.856: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313910322s Jan 27 10:58:09.880: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.338125378s Jan 27 10:58:11.896: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.353910577s STEP: Saw pod success Jan 27 10:58:11.896: INFO: Pod "pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:58:11.902: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 27 10:58:12.804: INFO: Waiting for pod pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005 to disappear Jan 27 10:58:12.816: INFO: Pod pod-projected-configmaps-e19fd86d-40f3-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:58:12.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lx5cs" for this suite. Jan 27 10:58:18.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:58:19.101: INFO: namespace: e2e-tests-projected-lx5cs, resource: bindings, ignored listing per whitelist Jan 27 10:58:19.163: INFO: namespace e2e-tests-projected-lx5cs deletion completed in 6.33992807s • [SLOW TEST:19.938 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:58:19.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 27 10:58:19.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-d529b' Jan 27 10:58:21.139: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be 
removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 27 10:58:21.139: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jan 27 10:58:25.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-d529b' Jan 27 10:58:25.422: INFO: stderr: "" Jan 27 10:58:25.422: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:58:25.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d529b" for this suite. Jan 27 10:58:31.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:58:31.954: INFO: namespace: e2e-tests-kubectl-d529b, resource: bindings, ignored listing per whitelist Jan 27 10:58:31.970: INFO: namespace e2e-tests-kubectl-d529b deletion completed in 6.531245879s • [SLOW TEST:12.806 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:58:31.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 10:58:32.202: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 27 10:58:37.491: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 27 10:58:41.840: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 27 10:58:41.936: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vmtlh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vmtlh/deployments/test-cleanup-deployment,UID:fae20594-40f3-11ea-a994-fa163e34d433,ResourceVersion:19622825,Generation:1,CreationTimestamp:2020-01-27 10:58:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 27 10:58:41.984: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jan 27 10:58:41.984: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 27 10:58:41.984: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-vmtlh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vmtlh/replicasets/test-cleanup-controller,UID:f51d3616-40f3-11ea-a994-fa163e34d433,ResourceVersion:19622827,Generation:1,CreationTimestamp:2020-01-27 10:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fae20594-40f3-11ea-a994-fa163e34d433 0xc001ce4ff7 0xc001ce4ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 27 10:58:42.016: INFO: Pod "test-cleanup-controller-dr2ht" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dr2ht,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-vmtlh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vmtlh/pods/test-cleanup-controller-dr2ht,UID:f5222aae-40f3-11ea-a994-fa163e34d433,ResourceVersion:19622822,Generation:0,CreationTimestamp:2020-01-27 10:58:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f51d3616-40f3-11ea-a994-fa163e34d433 0xc001ce5597 0xc001ce5598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7xxdf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7xxdf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-7xxdf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ce5600} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ce5620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 10:58:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 10:58:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 10:58:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 10:58:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-27 10:58:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 10:58:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ecccd250e46266fa38b0432f6922a277d31ec73b2e504abfeb14c44eaedb4c2f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:58:42.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vmtlh" for this suite. 
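The Deployment dumped above is created with RevisionHistoryLimit:*0, which is what this spec exercises: once a ReplicaSet is scaled down it should be deleted rather than kept as rollout history. The commands below are an illustrative, hand-run equivalent, not part of the recorded output; the resource name and image tags are arbitrary choices and a reasonably current kubectl is assumed.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no old ReplicaSets once they are scaled down
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: web
        image: nginx:1.14-alpine
EOF
# Roll out a new revision, then list ReplicaSets: only the current one should remain.
kubectl set image deployment/cleanup-demo web=nginx:1.15-alpine
kubectl rollout status deployment/cleanup-demo
kubectl get replicasets -l app=cleanup-demo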
Jan 27 10:58:56.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:58:56.425: INFO: namespace: e2e-tests-deployment-vmtlh, resource: bindings, ignored listing per whitelist Jan 27 10:58:56.680: INFO: namespace e2e-tests-deployment-vmtlh deletion completed in 14.650860731s • [SLOW TEST:24.711 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:58:56.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 27 10:58:57.004: INFO: Waiting up to 5m0s for pod "pod-03e439c3-40f4-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-qt7vj" to be "success or failure" Jan 27 10:58:57.010: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.98692ms Jan 27 10:58:59.017: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013110057s Jan 27 10:59:01.037: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032720242s Jan 27 10:59:03.051: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046852917s Jan 27 10:59:05.077: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07280346s Jan 27 10:59:07.091: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08742216s STEP: Saw pod success Jan 27 10:59:07.091: INFO: Pod "pod-03e439c3-40f4-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 10:59:07.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-03e439c3-40f4-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 10:59:07.212: INFO: Waiting for pod pod-03e439c3-40f4-11ea-b652-0242ac110005 to disappear Jan 27 10:59:07.220: INFO: Pod pod-03e439c3-40f4-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:59:07.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qt7vj" for this suite. 
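The "emptydir 0666 on tmpfs" step above amounts to a pod with a Memory-medium emptyDir mounted into a container that creates a file and reports its mode. A hand-run sketch of the same idea follows (not part of the recorded output; the busybox image and names are arbitrary):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/volume/test && chmod 0666 /mnt/volume/test && ls -l /mnt/volume/test && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed, matching the (root,0666,tmpfs) variant
EOF
# Once the container has exited, the file mode (-rw-rw-rw-) and the tmpfs mount
# show up in the pod log, which is roughly what the e2e assertion checks.
kubectl logs emptydir-0666-demo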
Jan 27 10:59:13.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 10:59:13.515: INFO: namespace: e2e-tests-emptydir-qt7vj, resource: bindings, ignored listing per whitelist Jan 27 10:59:13.535: INFO: namespace e2e-tests-emptydir-qt7vj deletion completed in 6.304565016s • [SLOW TEST:16.854 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 10:59:13.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0127 10:59:59.850082 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 27 10:59:59.850: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 10:59:59.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jgrh7" for this suite. 
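The garbage-collector spec above deletes a replication controller with orphaning delete options and then confirms its pods are left running. The equivalent by hand looks roughly like this (illustrative only; names and image are arbitrary, and the cascade flag spelling depends on the kubectl version: --cascade=false on clients of the vintage in this log, --cascade=orphan on current ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-demo
spec:
  replicas: 2
  selector:
    app: orphan-demo
  template:
    metadata:
      labels:
        app: orphan-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# Delete only the controller; the orphaned pods should stay Running.
kubectl delete rc orphan-demo --cascade=orphan
kubectl get pods -l app=orphan-demo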
Jan 27 11:00:23.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:00:24.112: INFO: namespace: e2e-tests-gc-jgrh7, resource: bindings, ignored listing per whitelist Jan 27 11:00:24.153: INFO: namespace e2e-tests-gc-jgrh7 deletion completed in 24.294331731s • [SLOW TEST:70.618 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:00:24.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:00:34.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xwk4f" for this suite. 
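The kubelet spec above schedules a busybox pod with hostAliases and then checks that the kubelet wrote the corresponding lines into the container's /etc/hosts. A minimal hand-run version (not from the recorded output; the IP, hostnames and image are arbitrary):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["cat", "/etc/hosts"]
EOF
# After the container exits, the aliased entries appear in its output.
kubectl logs hostaliases-demo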
Jan 27 11:01:29.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:01:29.175: INFO: namespace: e2e-tests-kubelet-test-xwk4f, resource: bindings, ignored listing per whitelist Jan 27 11:01:29.259: INFO: namespace e2e-tests-kubelet-test-xwk4f deletion completed in 54.263293378s • [SLOW TEST:65.105 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:01:29.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 27 11:01:42.033: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5ec15963-40f4-11ea-b652-0242ac110005" Jan 27 11:01:42.034: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5ec15963-40f4-11ea-b652-0242ac110005" in namespace "e2e-tests-pods-p64m7" to be "terminated due to deadline exceeded" Jan 27 11:01:42.052: INFO: Pod "pod-update-activedeadlineseconds-5ec15963-40f4-11ea-b652-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 18.252187ms Jan 27 11:01:44.084: INFO: Pod "pod-update-activedeadlineseconds-5ec15963-40f4-11ea-b652-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.050771469s Jan 27 11:01:44.084: INFO: Pod "pod-update-activedeadlineseconds-5ec15963-40f4-11ea-b652-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:01:44.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-p64m7" for this suite. 
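The pods spec above relies on activeDeadlineSeconds being one of the few pod-spec fields that can be changed on a running pod: once a short deadline is set, the kubelet terminates the pod and its phase becomes Failed with reason DeadlineExceeded, which is the condition the test waits for. An illustrative hand-run equivalent (pod name and image are arbitrary):

kubectl run deadline-demo --image=nginx:1.14-alpine --restart=Never
# Shrink the deadline in place; only a small set of pod fields accept updates,
# and activeDeadlineSeconds is one of them.
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# A few seconds later the pod reports Failed / DeadlineExceeded.
kubectl get pod deadline-demo -o jsonpath='{.status.phase} {.status.reason}{"\n"}'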
Jan 27 11:01:50.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:01:50.291: INFO: namespace: e2e-tests-pods-p64m7, resource: bindings, ignored listing per whitelist Jan 27 11:01:50.304: INFO: namespace e2e-tests-pods-p64m7 deletion completed in 6.200537702s • [SLOW TEST:21.046 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:01:50.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:02:00.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-qgq9s" for this suite. 
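The kubelet spec above runs a busybox pod with a simple command and reads the output back through the logs endpoint. The same round trip by hand (illustrative; name, image and message are arbitrary):

kubectl run logs-demo --image=busybox:1.29 --restart=Never -- sh -c 'echo hello from the kubelet test'
# Once the container has exited, the echoed line is what kubectl logs returns.
kubectl logs logs-demo
kubectl delete pod logs-demo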
Jan 27 11:02:55.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:02:55.150: INFO: namespace: e2e-tests-kubelet-test-qgq9s, resource: bindings, ignored listing per whitelist Jan 27 11:02:55.186: INFO: namespace e2e-tests-kubelet-test-qgq9s deletion completed in 54.284442337s • [SLOW TEST:64.881 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:02:55.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5dpbj [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5dpbj STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5dpbj Jan 27 11:02:55.398: INFO: Found 0 stateful pods, waiting for 1 Jan 27 11:03:05.414: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 27 11:03:05.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 11:03:06.079: INFO: stderr: "I0127 11:03:05.660790 87 log.go:172] (0xc0005ea2c0) (0xc000693360) Create stream\nI0127 11:03:05.660893 87 log.go:172] (0xc0005ea2c0) (0xc000693360) Stream added, broadcasting: 1\nI0127 11:03:05.666628 87 log.go:172] (0xc0005ea2c0) Reply frame received for 1\nI0127 11:03:05.666753 87 log.go:172] (0xc0005ea2c0) (0xc000803900) Create stream\nI0127 11:03:05.666816 87 log.go:172] (0xc0005ea2c0) (0xc000803900) Stream added, broadcasting: 3\nI0127 11:03:05.668274 87 log.go:172] (0xc0005ea2c0) Reply frame received for 3\nI0127 11:03:05.668314 87 log.go:172] (0xc0005ea2c0) (0xc00013a000) Create stream\nI0127 11:03:05.668326 87 log.go:172] (0xc0005ea2c0) (0xc00013a000) Stream added, broadcasting: 5\nI0127 11:03:05.669808 87 log.go:172] (0xc0005ea2c0) Reply frame received for 5\nI0127 11:03:05.958170 87 log.go:172] (0xc0005ea2c0) Data frame received for 3\nI0127 11:03:05.958239 87 
log.go:172] (0xc000803900) (3) Data frame handling\nI0127 11:03:05.958260 87 log.go:172] (0xc000803900) (3) Data frame sent\nI0127 11:03:06.072585 87 log.go:172] (0xc0005ea2c0) Data frame received for 1\nI0127 11:03:06.072675 87 log.go:172] (0xc000693360) (1) Data frame handling\nI0127 11:03:06.072691 87 log.go:172] (0xc000693360) (1) Data frame sent\nI0127 11:03:06.072715 87 log.go:172] (0xc0005ea2c0) (0xc00013a000) Stream removed, broadcasting: 5\nI0127 11:03:06.072759 87 log.go:172] (0xc0005ea2c0) (0xc000803900) Stream removed, broadcasting: 3\nI0127 11:03:06.072774 87 log.go:172] (0xc0005ea2c0) (0xc000693360) Stream removed, broadcasting: 1\nI0127 11:03:06.072809 87 log.go:172] (0xc0005ea2c0) Go away received\nI0127 11:03:06.072996 87 log.go:172] (0xc0005ea2c0) (0xc000693360) Stream removed, broadcasting: 1\nI0127 11:03:06.073014 87 log.go:172] (0xc0005ea2c0) (0xc000803900) Stream removed, broadcasting: 3\nI0127 11:03:06.073021 87 log.go:172] (0xc0005ea2c0) (0xc00013a000) Stream removed, broadcasting: 5\n" Jan 27 11:03:06.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 11:03:06.079: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 11:03:06.095: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 27 11:03:16.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 11:03:16.115: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 11:03:16.179: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:16.179: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:16.179: INFO: Jan 27 11:03:16.179: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 27 11:03:17.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.958357547s Jan 27 11:03:18.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.941866692s Jan 27 11:03:19.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.922291977s Jan 27 11:03:20.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.891642194s Jan 27 11:03:21.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.875762118s Jan 27 11:03:22.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.858407453s Jan 27 11:03:23.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.833953196s Jan 27 11:03:24.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.672334648s Jan 27 11:03:25.774: INFO: Verifying statefulset ss doesn't scale past 3 for another 375.064994ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5dpbj Jan 27 11:03:26.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:03:27.685: INFO: stderr: "I0127 11:03:27.163657 110 log.go:172] 
(0xc0006f42c0) (0xc00079e640) Create stream\nI0127 11:03:27.163880 110 log.go:172] (0xc0006f42c0) (0xc00079e640) Stream added, broadcasting: 1\nI0127 11:03:27.169586 110 log.go:172] (0xc0006f42c0) Reply frame received for 1\nI0127 11:03:27.169622 110 log.go:172] (0xc0006f42c0) (0xc00058cd20) Create stream\nI0127 11:03:27.169635 110 log.go:172] (0xc0006f42c0) (0xc00058cd20) Stream added, broadcasting: 3\nI0127 11:03:27.170868 110 log.go:172] (0xc0006f42c0) Reply frame received for 3\nI0127 11:03:27.170896 110 log.go:172] (0xc0006f42c0) (0xc000524000) Create stream\nI0127 11:03:27.170927 110 log.go:172] (0xc0006f42c0) (0xc000524000) Stream added, broadcasting: 5\nI0127 11:03:27.171971 110 log.go:172] (0xc0006f42c0) Reply frame received for 5\nI0127 11:03:27.526022 110 log.go:172] (0xc0006f42c0) Data frame received for 3\nI0127 11:03:27.526137 110 log.go:172] (0xc00058cd20) (3) Data frame handling\nI0127 11:03:27.526179 110 log.go:172] (0xc00058cd20) (3) Data frame sent\nI0127 11:03:27.677248 110 log.go:172] (0xc0006f42c0) (0xc00058cd20) Stream removed, broadcasting: 3\nI0127 11:03:27.677387 110 log.go:172] (0xc0006f42c0) Data frame received for 1\nI0127 11:03:27.677411 110 log.go:172] (0xc00079e640) (1) Data frame handling\nI0127 11:03:27.677419 110 log.go:172] (0xc00079e640) (1) Data frame sent\nI0127 11:03:27.677522 110 log.go:172] (0xc0006f42c0) (0xc00079e640) Stream removed, broadcasting: 1\nI0127 11:03:27.677734 110 log.go:172] (0xc0006f42c0) (0xc000524000) Stream removed, broadcasting: 5\nI0127 11:03:27.677772 110 log.go:172] (0xc0006f42c0) (0xc00079e640) Stream removed, broadcasting: 1\nI0127 11:03:27.677785 110 log.go:172] (0xc0006f42c0) (0xc00058cd20) Stream removed, broadcasting: 3\nI0127 11:03:27.677793 110 log.go:172] (0xc0006f42c0) (0xc000524000) Stream removed, broadcasting: 5\nI0127 11:03:27.678004 110 log.go:172] (0xc0006f42c0) Go away received\n" Jan 27 11:03:27.685: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 11:03:27.685: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 11:03:27.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:03:28.148: INFO: stderr: "I0127 11:03:27.880612 132 log.go:172] (0xc000718370) (0xc0007965a0) Create stream\nI0127 11:03:27.880767 132 log.go:172] (0xc000718370) (0xc0007965a0) Stream added, broadcasting: 1\nI0127 11:03:27.885138 132 log.go:172] (0xc000718370) Reply frame received for 1\nI0127 11:03:27.885171 132 log.go:172] (0xc000718370) (0xc0005a2e60) Create stream\nI0127 11:03:27.885183 132 log.go:172] (0xc000718370) (0xc0005a2e60) Stream added, broadcasting: 3\nI0127 11:03:27.886136 132 log.go:172] (0xc000718370) Reply frame received for 3\nI0127 11:03:27.886163 132 log.go:172] (0xc000718370) (0xc000544000) Create stream\nI0127 11:03:27.886172 132 log.go:172] (0xc000718370) (0xc000544000) Stream added, broadcasting: 5\nI0127 11:03:27.887070 132 log.go:172] (0xc000718370) Reply frame received for 5\nI0127 11:03:27.998501 132 log.go:172] (0xc000718370) Data frame received for 3\nI0127 11:03:27.998571 132 log.go:172] (0xc0005a2e60) (3) Data frame handling\nI0127 11:03:27.998586 132 log.go:172] (0xc0005a2e60) (3) Data frame sent\nI0127 11:03:27.998632 132 log.go:172] (0xc000718370) Data frame received for 5\nI0127 11:03:27.998640 132 log.go:172] (0xc000544000) (5) Data 
frame handling\nI0127 11:03:27.998648 132 log.go:172] (0xc000544000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0127 11:03:28.140691 132 log.go:172] (0xc000718370) Data frame received for 1\nI0127 11:03:28.140765 132 log.go:172] (0xc0007965a0) (1) Data frame handling\nI0127 11:03:28.140790 132 log.go:172] (0xc0007965a0) (1) Data frame sent\nI0127 11:03:28.140807 132 log.go:172] (0xc000718370) (0xc0007965a0) Stream removed, broadcasting: 1\nI0127 11:03:28.140961 132 log.go:172] (0xc000718370) (0xc000544000) Stream removed, broadcasting: 5\nI0127 11:03:28.141027 132 log.go:172] (0xc000718370) (0xc0005a2e60) Stream removed, broadcasting: 3\nI0127 11:03:28.141221 132 log.go:172] (0xc000718370) (0xc0007965a0) Stream removed, broadcasting: 1\nI0127 11:03:28.141248 132 log.go:172] (0xc000718370) (0xc0005a2e60) Stream removed, broadcasting: 3\nI0127 11:03:28.141263 132 log.go:172] (0xc000718370) (0xc000544000) Stream removed, broadcasting: 5\n" Jan 27 11:03:28.148: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 11:03:28.148: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 11:03:28.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:03:28.839: INFO: stderr: "I0127 11:03:28.306992 152 log.go:172] (0xc0006ea370) (0xc00070e5a0) Create stream\nI0127 11:03:28.307124 152 log.go:172] (0xc0006ea370) (0xc00070e5a0) Stream added, broadcasting: 1\nI0127 11:03:28.312280 152 log.go:172] (0xc0006ea370) Reply frame received for 1\nI0127 11:03:28.312312 152 log.go:172] (0xc0006ea370) (0xc0007bcdc0) Create stream\nI0127 11:03:28.312321 152 log.go:172] (0xc0006ea370) (0xc0007bcdc0) Stream added, broadcasting: 3\nI0127 11:03:28.313514 152 log.go:172] (0xc0006ea370) Reply frame received for 3\nI0127 11:03:28.313534 152 log.go:172] (0xc0006ea370) (0xc00070e640) Create stream\nI0127 11:03:28.313539 152 log.go:172] (0xc0006ea370) (0xc00070e640) Stream added, broadcasting: 5\nI0127 11:03:28.314392 152 log.go:172] (0xc0006ea370) Reply frame received for 5\nI0127 11:03:28.428825 152 log.go:172] (0xc0006ea370) Data frame received for 3\nI0127 11:03:28.428882 152 log.go:172] (0xc0007bcdc0) (3) Data frame handling\nI0127 11:03:28.428912 152 log.go:172] (0xc0007bcdc0) (3) Data frame sent\nI0127 11:03:28.428951 152 log.go:172] (0xc0006ea370) Data frame received for 5\nI0127 11:03:28.428969 152 log.go:172] (0xc00070e640) (5) Data frame handling\nI0127 11:03:28.428982 152 log.go:172] (0xc00070e640) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0127 11:03:28.833528 152 log.go:172] (0xc0006ea370) Data frame received for 1\nI0127 11:03:28.833645 152 log.go:172] (0xc0006ea370) (0xc00070e640) Stream removed, broadcasting: 5\nI0127 11:03:28.833678 152 log.go:172] (0xc00070e5a0) (1) Data frame handling\nI0127 11:03:28.833696 152 log.go:172] (0xc00070e5a0) (1) Data frame sent\nI0127 11:03:28.833790 152 log.go:172] (0xc0006ea370) (0xc0007bcdc0) Stream removed, broadcasting: 3\nI0127 11:03:28.833957 152 log.go:172] (0xc0006ea370) (0xc00070e5a0) Stream removed, broadcasting: 1\nI0127 11:03:28.833997 152 log.go:172] (0xc0006ea370) Go away received\nI0127 11:03:28.834442 152 log.go:172] (0xc0006ea370) (0xc00070e5a0) Stream removed, broadcasting: 1\nI0127 11:03:28.834454 152 log.go:172] 
(0xc0006ea370) (0xc0007bcdc0) Stream removed, broadcasting: 3\nI0127 11:03:28.834459 152 log.go:172] (0xc0006ea370) (0xc00070e640) Stream removed, broadcasting: 5\n" Jan 27 11:03:28.839: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 11:03:28.839: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 11:03:28.855: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:03:28.855: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 11:03:38.872: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:03:38.872: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:03:38.872: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 27 11:03:38.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 11:03:39.536: INFO: stderr: "I0127 11:03:39.097286 174 log.go:172] (0xc00088a2c0) (0xc000637360) Create stream\nI0127 11:03:39.097721 174 log.go:172] (0xc00088a2c0) (0xc000637360) Stream added, broadcasting: 1\nI0127 11:03:39.126509 174 log.go:172] (0xc00088a2c0) Reply frame received for 1\nI0127 11:03:39.126738 174 log.go:172] (0xc00088a2c0) (0xc000704000) Create stream\nI0127 11:03:39.126779 174 log.go:172] (0xc00088a2c0) (0xc000704000) Stream added, broadcasting: 3\nI0127 11:03:39.131385 174 log.go:172] (0xc00088a2c0) Reply frame received for 3\nI0127 11:03:39.131441 174 log.go:172] (0xc00088a2c0) (0xc000637400) Create stream\nI0127 11:03:39.131469 174 log.go:172] (0xc00088a2c0) (0xc000637400) Stream added, broadcasting: 5\nI0127 11:03:39.133179 174 log.go:172] (0xc00088a2c0) Reply frame received for 5\nI0127 11:03:39.383500 174 log.go:172] (0xc00088a2c0) Data frame received for 3\nI0127 11:03:39.383640 174 log.go:172] (0xc000704000) (3) Data frame handling\nI0127 11:03:39.383664 174 log.go:172] (0xc000704000) (3) Data frame sent\nI0127 11:03:39.524318 174 log.go:172] (0xc00088a2c0) Data frame received for 1\nI0127 11:03:39.524408 174 log.go:172] (0xc000637360) (1) Data frame handling\nI0127 11:03:39.524437 174 log.go:172] (0xc000637360) (1) Data frame sent\nI0127 11:03:39.524856 174 log.go:172] (0xc00088a2c0) (0xc000637360) Stream removed, broadcasting: 1\nI0127 11:03:39.525235 174 log.go:172] (0xc00088a2c0) (0xc000704000) Stream removed, broadcasting: 3\nI0127 11:03:39.527870 174 log.go:172] (0xc00088a2c0) (0xc000637400) Stream removed, broadcasting: 5\nI0127 11:03:39.527924 174 log.go:172] (0xc00088a2c0) (0xc000637360) Stream removed, broadcasting: 1\nI0127 11:03:39.527944 174 log.go:172] (0xc00088a2c0) (0xc000704000) Stream removed, broadcasting: 3\nI0127 11:03:39.527952 174 log.go:172] (0xc00088a2c0) (0xc000637400) Stream removed, broadcasting: 5\n" Jan 27 11:03:39.536: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 11:03:39.536: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 11:03:39.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-1 -- /bin/sh -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Jan 27 11:03:40.119: INFO: stderr: "I0127 11:03:39.807360 196 log.go:172] (0xc0006c2370) (0xc000593220) Create stream\nI0127 11:03:39.807643 196 log.go:172] (0xc0006c2370) (0xc000593220) Stream added, broadcasting: 1\nI0127 11:03:39.813164 196 log.go:172] (0xc0006c2370) Reply frame received for 1\nI0127 11:03:39.813202 196 log.go:172] (0xc0006c2370) (0xc000700000) Create stream\nI0127 11:03:39.813211 196 log.go:172] (0xc0006c2370) (0xc000700000) Stream added, broadcasting: 3\nI0127 11:03:39.815053 196 log.go:172] (0xc0006c2370) Reply frame received for 3\nI0127 11:03:39.815083 196 log.go:172] (0xc0006c2370) (0xc0005932c0) Create stream\nI0127 11:03:39.815092 196 log.go:172] (0xc0006c2370) (0xc0005932c0) Stream added, broadcasting: 5\nI0127 11:03:39.816879 196 log.go:172] (0xc0006c2370) Reply frame received for 5\nI0127 11:03:39.991249 196 log.go:172] (0xc0006c2370) Data frame received for 3\nI0127 11:03:39.991307 196 log.go:172] (0xc000700000) (3) Data frame handling\nI0127 11:03:39.991325 196 log.go:172] (0xc000700000) (3) Data frame sent\nI0127 11:03:40.114436 196 log.go:172] (0xc0006c2370) (0xc000700000) Stream removed, broadcasting: 3\nI0127 11:03:40.114594 196 log.go:172] (0xc0006c2370) Data frame received for 1\nI0127 11:03:40.114666 196 log.go:172] (0xc0006c2370) (0xc0005932c0) Stream removed, broadcasting: 5\nI0127 11:03:40.114723 196 log.go:172] (0xc000593220) (1) Data frame handling\nI0127 11:03:40.114758 196 log.go:172] (0xc000593220) (1) Data frame sent\nI0127 11:03:40.114780 196 log.go:172] (0xc0006c2370) (0xc000593220) Stream removed, broadcasting: 1\nI0127 11:03:40.114799 196 log.go:172] (0xc0006c2370) Go away received\nI0127 11:03:40.115123 196 log.go:172] (0xc0006c2370) (0xc000593220) Stream removed, broadcasting: 1\nI0127 11:03:40.115155 196 log.go:172] (0xc0006c2370) (0xc000700000) Stream removed, broadcasting: 3\nI0127 11:03:40.115175 196 log.go:172] (0xc0006c2370) (0xc0005932c0) Stream removed, broadcasting: 5\n" Jan 27 11:03:40.119: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 11:03:40.119: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 11:03:40.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 11:03:40.687: INFO: stderr: "I0127 11:03:40.300666 217 log.go:172] (0xc00070a370) (0xc0007a8640) Create stream\nI0127 11:03:40.300880 217 log.go:172] (0xc00070a370) (0xc0007a8640) Stream added, broadcasting: 1\nI0127 11:03:40.306178 217 log.go:172] (0xc00070a370) Reply frame received for 1\nI0127 11:03:40.306216 217 log.go:172] (0xc00070a370) (0xc0007c8000) Create stream\nI0127 11:03:40.306231 217 log.go:172] (0xc00070a370) (0xc0007c8000) Stream added, broadcasting: 3\nI0127 11:03:40.307556 217 log.go:172] (0xc00070a370) Reply frame received for 3\nI0127 11:03:40.307582 217 log.go:172] (0xc00070a370) (0xc0005f0be0) Create stream\nI0127 11:03:40.307598 217 log.go:172] (0xc00070a370) (0xc0005f0be0) Stream added, broadcasting: 5\nI0127 11:03:40.308671 217 log.go:172] (0xc00070a370) Reply frame received for 5\nI0127 11:03:40.494274 217 log.go:172] (0xc00070a370) Data frame received for 3\nI0127 11:03:40.494612 217 log.go:172] (0xc0007c8000) (3) Data frame handling\nI0127 11:03:40.494875 217 log.go:172] (0xc0007c8000) (3) Data frame sent\nI0127 11:03:40.680280 
217 log.go:172] (0xc00070a370) (0xc0007c8000) Stream removed, broadcasting: 3\nI0127 11:03:40.680504 217 log.go:172] (0xc00070a370) Data frame received for 1\nI0127 11:03:40.680518 217 log.go:172] (0xc00070a370) (0xc0005f0be0) Stream removed, broadcasting: 5\nI0127 11:03:40.680545 217 log.go:172] (0xc0007a8640) (1) Data frame handling\nI0127 11:03:40.680572 217 log.go:172] (0xc0007a8640) (1) Data frame sent\nI0127 11:03:40.680584 217 log.go:172] (0xc00070a370) (0xc0007a8640) Stream removed, broadcasting: 1\nI0127 11:03:40.680742 217 log.go:172] (0xc00070a370) Go away received\nI0127 11:03:40.681150 217 log.go:172] (0xc00070a370) (0xc0007a8640) Stream removed, broadcasting: 1\nI0127 11:03:40.681188 217 log.go:172] (0xc00070a370) (0xc0007c8000) Stream removed, broadcasting: 3\nI0127 11:03:40.681201 217 log.go:172] (0xc00070a370) (0xc0005f0be0) Stream removed, broadcasting: 5\n" Jan 27 11:03:40.688: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 11:03:40.688: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 11:03:40.688: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 11:03:40.711: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 27 11:03:50.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 11:03:50.758: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 27 11:03:50.758: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 27 11:03:50.795: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:50.795: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:50.795: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:50.795: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:50.795: INFO: Jan 27 11:03:50.795: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:03:53.206: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:53.206: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:53.206: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:53.206: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:53.206: INFO: Jan 27 11:03:53.206: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:03:54.248: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:54.248: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:54.248: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:54.248: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:54.248: INFO: Jan 27 11:03:54.248: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:03:55.301: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:55.301: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:55.301: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:55.301: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:55.301: INFO: Jan 27 11:03:55.301: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:03:56.339: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:56.339: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:56.339: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:56.339: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:56.339: INFO: Jan 27 11:03:56.339: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:03:58.247: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:58.247: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 
11:03:58.247: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:58.247: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:58.247: INFO: Jan 27 11:03:58.247: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:03:59.264: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:03:59.265: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:03:59.265: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:59.265: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:03:59.265: INFO: Jan 27 11:03:59.265: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 11:04:00.401: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 11:04:00.401: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:02:55 +0000 UTC }] Jan 27 11:04:00.401: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:03:16 +0000 UTC }] Jan 27 11:04:00.401: INFO: Jan 27 11:04:00.401: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-5dpbj Jan 27 11:04:01.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:04:01.631: INFO: rc: 1 Jan 27 11:04:01.631: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ea2d50 exit status 1 true [0xc000a2f2d8 0xc000a2f2f0 0xc000a2f308] [0xc000a2f2d8 0xc000a2f2f0 0xc000a2f308] [0xc000a2f2e8 0xc000a2f300] [0x935700 0x935700] 0xc000bdbaa0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 27 11:04:11.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:04:11.773: INFO: rc: 1 Jan 27 11:04:11.774: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea2ea0 exit status 1 true [0xc000a2f310 0xc000a2f328 0xc000a2f340] [0xc000a2f310 0xc000a2f328 0xc000a2f340] [0xc000a2f320 0xc000a2f338] [0x935700 0x935700] 0xc000bdbd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:04:21.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:04:21.905: INFO: rc: 1 Jan 27 11:04:21.905: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e4bce0 exit status 1 true [0xc000b02870 0xc000b02888 0xc000b028a0] [0xc000b02870 0xc000b02888 0xc000b028a0] [0xc000b02880 0xc000b02898] [0x935700 0x935700] 0xc000a77f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:04:31.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:04:32.044: INFO: rc: 1 Jan 27 11:04:32.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh 
-c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b6270 exit status 1 true [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f5a08 0xc0004f5b50] [0x935700 0x935700] 0xc001ca8720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:04:42.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:04:42.188: INFO: rc: 1 Jan 27 11:04:42.189: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ba2120 exit status 1 true [0xc000b02000 0xc000b02028 0xc000b02040] [0xc000b02000 0xc000b02028 0xc000b02040] [0xc000b02010 0xc000b02038] [0x935700 0x935700] 0xc000bfe420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:04:52.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:04:52.307: INFO: rc: 1 Jan 27 11:04:52.307: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ba2270 exit status 1 true [0xc000b02048 0xc000b02060 0xc000b02078] [0xc000b02048 0xc000b02060 0xc000b02078] [0xc000b02058 0xc000b02070] [0x935700 0x935700] 0xc000bfeba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:05:02.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:05:02.420: INFO: rc: 1 Jan 27 11:05:02.420: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2120 exit status 1 true [0xc000a2e038 0xc000a2e0d8 0xc000a2e0f8] [0xc000a2e038 0xc000a2e0d8 0xc000a2e0f8] [0xc000a2e0c0 0xc000a2e0f0] [0x935700 0x935700] 0xc000a763c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:05:12.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:05:12.574: INFO: rc: 1 Jan 27 11:05:12.574: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2240 exit status 1 true [0xc000a2e120 0xc000a2e1e0 0xc000a2e248] 
[0xc000a2e120 0xc000a2e1e0 0xc000a2e248] [0xc000a2e198 0xc000a2e200] [0x935700 0x935700] 0xc000a76660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:05:22.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:05:22.666: INFO: rc: 1 Jan 27 11:05:22.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ba2390 exit status 1 true [0xc000b02080 0xc000b02098 0xc000b020b0] [0xc000b02080 0xc000b02098 0xc000b020b0] [0xc000b02090 0xc000b020a8] [0x935700 0x935700] 0xc000bfef00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:05:32.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:05:32.813: INFO: rc: 1 Jan 27 11:05:32.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001060180 exit status 1 true [0xc0003660d8 0xc0003664a8 0xc000366588] [0xc0003660d8 0xc0003664a8 0xc000366588] [0xc000366408 0xc000366540] [0x935700 0x935700] 0xc0010531a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:05:42.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:05:42.957: INFO: rc: 1 Jan 27 11:05:42.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010602d0 exit status 1 true [0xc0003665b0 0xc000366780 0xc000366900] [0xc0003665b0 0xc000366780 0xc000366900] [0xc000366758 0xc000366860] [0x935700 0x935700] 0xc001053a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:05:52.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:05:53.108: INFO: rc: 1 Jan 27 11:05:53.108: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b63c0 exit status 1 true [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5cc8 0xc0004f5d98] [0x935700 0x935700] 0xc001ca94a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1 Jan 27 11:06:03.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:06:03.243: INFO: rc: 1 Jan 27 11:06:03.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ba2d50 exit status 1 true [0xc000b020b8 0xc000b020d0 0xc000b020e8] [0xc000b020b8 0xc000b020d0 0xc000b020e8] [0xc000b020c8 0xc000b020e0] [0x935700 0x935700] 0xc000bff440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:06:13.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:06:13.391: INFO: rc: 1 Jan 27 11:06:13.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b6570 exit status 1 true [0xc0004f5e20 0xc0009b2018 0xc0009b2040] [0xc0004f5e20 0xc0009b2018 0xc0009b2040] [0xc0004f5f20 0xc0009b2038] [0x935700 0x935700] 0xc00119c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:06:23.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:06:23.551: INFO: rc: 1 Jan 27 11:06:23.551: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2390 exit status 1 true [0xc000a2e250 0xc000a2e2e0 0xc000a2e358] [0xc000a2e250 0xc000a2e2e0 0xc000a2e358] [0xc000a2e298 0xc000a2e330] [0x935700 0x935700] 0xc000a76900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:06:33.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:06:33.725: INFO: rc: 1 Jan 27 11:06:33.725: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b62a0 exit status 1 true [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f5a08 0xc0004f5b50] [0x935700 0x935700] 0xc001ca8720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:06:43.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jan 27 11:06:43.875: INFO: rc: 1 Jan 27 11:06:43.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2150 exit status 1 true [0xc0009b2018 0xc0009b2040 0xc0009b2080] [0xc0009b2018 0xc0009b2040 0xc0009b2080] [0xc0009b2038 0xc0009b2068] [0x935700 0x935700] 0xc00119c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:06:53.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:06:53.977: INFO: rc: 1 Jan 27 11:06:53.978: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d22d0 exit status 1 true [0xc0009b2088 0xc0009b20b8 0xc0009b20f0] [0xc0009b2088 0xc0009b20b8 0xc0009b20f0] [0xc0009b2098 0xc0009b20d8] [0x935700 0x935700] 0xc00119c480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:07:03.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:07:04.058: INFO: rc: 1 Jan 27 11:07:04.058: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b6480 exit status 1 true [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5cc8 0xc0004f5d98] [0x935700 0x935700] 0xc001ca94a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:07:14.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:07:14.141: INFO: rc: 1 Jan 27 11:07:14.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b6600 exit status 1 true [0xc0004f5e20 0xc000a2e038 0xc000a2e0d8] [0xc0004f5e20 0xc000a2e038 0xc000a2e0d8] [0xc0004f5f20 0xc000a2e0c0] [0x935700 0x935700] 0xc000a763c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:07:24.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:07:24.216: INFO: rc: 1 Jan 27 11:07:24.216: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001060120 exit status 1 true [0xc000b02000 0xc000b02028 0xc000b02040] [0xc000b02000 0xc000b02028 0xc000b02040] [0xc000b02010 0xc000b02038] [0x935700 0x935700] 0xc000bfe420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:07:34.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:07:34.307: INFO: rc: 1 Jan 27 11:07:34.307: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b6750 exit status 1 true [0xc000a2e0e0 0xc000a2e120 0xc000a2e1e0] [0xc000a2e0e0 0xc000a2e120 0xc000a2e1e0] [0xc000a2e0f8 0xc000a2e198] [0x935700 0x935700] 0xc000a76660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:07:44.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:07:44.447: INFO: rc: 1 Jan 27 11:07:44.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2480 exit status 1 true [0xc0009b2110 0xc0009b2138 0xc0009b2150] [0xc0009b2110 0xc0009b2138 0xc0009b2150] [0xc0009b2130 0xc0009b2148] [0x935700 0x935700] 0xc00119cfc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:07:54.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:07:54.588: INFO: rc: 1 Jan 27 11:07:54.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2600 exit status 1 true [0xc0009b2170 0xc0009b2198 0xc0009b21d8] [0xc0009b2170 0xc0009b2198 0xc0009b21d8] [0xc0009b2190 0xc0009b21c0] [0x935700 0x935700] 0xc00119d560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:08:04.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:08:04.713: INFO: rc: 1 Jan 27 11:08:04.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" 
not found [] 0xc0014d2720 exit status 1 true [0xc0009b21f0 0xc0009b2218 0xc0009b2230] [0xc0009b21f0 0xc0009b2218 0xc0009b2230] [0xc0009b2210 0xc0009b2228] [0x935700 0x935700] 0xc00119dc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:08:14.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:08:14.812: INFO: rc: 1 Jan 27 11:08:14.812: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001060270 exit status 1 true [0xc000b02048 0xc000b02060 0xc000b02078] [0xc000b02048 0xc000b02060 0xc000b02078] [0xc000b02058 0xc000b02070] [0x935700 0x935700] 0xc000bfeba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:08:24.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:08:24.938: INFO: rc: 1 Jan 27 11:08:24.939: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010603f0 exit status 1 true [0xc000b02080 0xc000b02098 0xc000b020b0] [0xc000b02080 0xc000b02098 0xc000b020b0] [0xc000b02090 0xc000b020a8] [0x935700 0x935700] 0xc000bfef00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:08:34.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:08:35.085: INFO: rc: 1 Jan 27 11:08:35.085: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001060570 exit status 1 true [0xc000b020c0 0xc000b020d8 0xc000b020f0] [0xc000b020c0 0xc000b020d8 0xc000b020f0] [0xc000b020d0 0xc000b020e8] [0x935700 0x935700] 0xc000bff440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:08:45.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:08:45.211: INFO: rc: 1 Jan 27 11:08:45.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2120 exit status 1 true [0xc0004f5978 0xc0004f5a08 0xc0004f5b50] [0xc0004f5978 0xc0004f5a08 0xc0004f5b50] [0xc0004f59f8 0xc0004f5a30] [0x935700 0x935700] 0xc001ca8720 }: 
Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:08:55.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:08:55.329: INFO: rc: 1 Jan 27 11:08:55.329: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014d2270 exit status 1 true [0xc0004f5c00 0xc0004f5cc8 0xc0004f5d98] [0xc0004f5c00 0xc0004f5cc8 0xc0004f5d98] [0xc0004f5cc0 0xc0004f5d58] [0x935700 0x935700] 0xc001ca94a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 11:09:05.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5dpbj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:09:05.479: INFO: rc: 1 Jan 27 11:09:05.479: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 27 11:09:05.479: INFO: Scaling statefulset ss to 0 Jan 27 11:09:05.517: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 27 11:09:05.524: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5dpbj Jan 27 11:09:05.532: INFO: Scaling statefulset ss to 0 Jan 27 11:09:05.554: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 11:09:05.560: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:09:07.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-5dpbj" for this suite. 
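The "Scaling statefulset ss to 0" and "Waiting for statefulset status.replicas updated to 0" steps above boil down to setting spec.replicas and polling status until it reports zero. Below is a minimal client-go sketch of that flow, assuming the kubeconfig path from this run, namespace and name taken from the log, and the pre-1.18 call signatures without a context argument (the cluster here is v1.13); it is a sketch, not the framework's own helper.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; this suite runs with /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "e2e-tests-statefulset-5dpbj", "ss" // taken from the log above
	zero := int32(0)

	// Set spec.replicas to 0 (client-go <=1.17 signatures, no context argument).
	ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ss.Spec.Replicas = &zero
	if _, err := cs.AppsV1().StatefulSets(ns).Update(ss); err != nil {
		panic(err)
	}

	// Poll until status.replicas reports 0, mirroring the
	// "Waiting for statefulset status.replicas updated to 0" step.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.Replicas == 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("statefulset scaled to 0")
}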
Jan 27 11:09:15.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:09:15.715: INFO: namespace: e2e-tests-statefulset-5dpbj, resource: bindings, ignored listing per whitelist Jan 27 11:09:15.776: INFO: namespace e2e-tests-statefulset-5dpbj deletion completed in 8.627094983s • [SLOW TEST:380.589 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:09:15.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 27 11:09:16.081: INFO: Waiting up to 5m0s for pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-87qhh" to be "success or failure" Jan 27 11:09:16.113: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.195796ms Jan 27 11:09:18.127: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045577317s Jan 27 11:09:20.174: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092529073s Jan 27 11:09:22.258: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176933538s Jan 27 11:09:24.286: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204287455s Jan 27 11:09:26.305: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.223770992s STEP: Saw pod success Jan 27 11:09:26.305: INFO: Pod "pod-74e5cf71-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:09:26.363: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-74e5cf71-40f5-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 11:09:27.717: INFO: Waiting for pod pod-74e5cf71-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:09:27.802: INFO: Pod pod-74e5cf71-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:09:27.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-87qhh" for this suite. 
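The "volume on tmpfs" case above turns on a single field: an emptyDir volume with medium Memory, mounted into a one-shot container whose output the test inspects. A sketch of that kind of Pod spec follows; the pod name, the busybox image, and the mount/stat command are illustrative assumptions rather than the suite's own test image and flags.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumption; the suite uses its own test image
				// Print the mount's filesystem and mode, the properties this case asserts on.
				Command:      []string{"sh", "-c", "mount | grep /mnt/test && stat -c %a /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}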
Jan 27 11:09:33.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:09:34.148: INFO: namespace: e2e-tests-emptydir-87qhh, resource: bindings, ignored listing per whitelist Jan 27 11:09:34.152: INFO: namespace e2e-tests-emptydir-87qhh deletion completed in 6.33771883s • [SLOW TEST:18.376 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:09:34.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 27 11:09:34.421: INFO: Waiting up to 5m0s for pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-6ccsb" to be "success or failure" Jan 27 11:09:34.429: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.478689ms Jan 27 11:09:36.443: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02210622s Jan 27 11:09:38.474: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053285503s Jan 27 11:09:40.491: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070130988s Jan 27 11:09:42.515: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093494911s Jan 27 11:09:44.568: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146709687s STEP: Saw pod success Jan 27 11:09:44.568: INFO: Pod "downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:09:44.589: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005 container dapi-container: STEP: delete the pod Jan 27 11:09:44.933: INFO: Waiting for pod downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:09:44.998: INFO: Pod downward-api-7fd5d73d-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:09:44.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6ccsb" for this suite. 
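The downward API env-var case that just finished is driven by fieldRef selectors on the container environment: one each for the pod's name, namespace, and IP. The sketch below shows the three selectors; the POD_* variable names follow the usual convention and are assumptions here, not a quote of the test's spec.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Environment variables sourced from the downward API, of the kind the
	// dapi-container above echoes back for the test to verify.
	env := []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}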
Jan 27 11:09:51.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:09:51.205: INFO: namespace: e2e-tests-downward-api-6ccsb, resource: bindings, ignored listing per whitelist Jan 27 11:09:51.240: INFO: namespace e2e-tests-downward-api-6ccsb deletion completed in 6.231893546s • [SLOW TEST:17.087 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:09:51.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:09:51.497: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
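"Check that daemon pods launch on every node of the cluster" is, at heart, a wait loop over the DaemonSet's availability. The framework counts pods per node (hence the repeated "Number of nodes with available pods" lines that follow); the simpler status-counter sketch below expresses the same condition. Namespace and name are taken from the log, the timeouts are assumptions, and the context-free client-go signatures again assume a v1.13-era client.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "e2e-tests-daemonsets-llgk8", "daemon-set" // from the log above

	// Wait until the DaemonSet reports one available pod per node it is scheduled to.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("desired=%d available=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon pods available on every node")
}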
Jan 27 11:09:51.530: INFO: Number of nodes with available pods: 0 Jan 27 11:09:51.530: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:52.571: INFO: Number of nodes with available pods: 0 Jan 27 11:09:52.571: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:53.672: INFO: Number of nodes with available pods: 0 Jan 27 11:09:53.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:54.584: INFO: Number of nodes with available pods: 0 Jan 27 11:09:54.584: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:55.566: INFO: Number of nodes with available pods: 0 Jan 27 11:09:55.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:56.765: INFO: Number of nodes with available pods: 0 Jan 27 11:09:56.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:58.093: INFO: Number of nodes with available pods: 0 Jan 27 11:09:58.093: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:58.669: INFO: Number of nodes with available pods: 0 Jan 27 11:09:58.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:09:59.588: INFO: Number of nodes with available pods: 0 Jan 27 11:09:59.588: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:00.597: INFO: Number of nodes with available pods: 0 Jan 27 11:10:00.597: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:01.848: INFO: Number of nodes with available pods: 1 Jan 27 11:10:01.848: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 27 11:10:02.027: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:03.056: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:04.052: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:05.098: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:06.061: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:07.064: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:08.045: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:09.043: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:09.043: INFO: Pod daemon-set-5t6r5 is not available Jan 27 11:10:10.042: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:10.042: INFO: Pod daemon-set-5t6r5 is not available Jan 27 11:10:11.045: INFO: Wrong image for pod: daemon-set-5t6r5. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:11.045: INFO: Pod daemon-set-5t6r5 is not available Jan 27 11:10:12.040: INFO: Wrong image for pod: daemon-set-5t6r5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 27 11:10:12.040: INFO: Pod daemon-set-5t6r5 is not available Jan 27 11:10:13.048: INFO: Pod daemon-set-9vlr6 is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 27 11:10:13.075: INFO: Number of nodes with available pods: 0 Jan 27 11:10:13.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:14.485: INFO: Number of nodes with available pods: 0 Jan 27 11:10:14.486: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:15.097: INFO: Number of nodes with available pods: 0 Jan 27 11:10:15.097: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:16.089: INFO: Number of nodes with available pods: 0 Jan 27 11:10:16.089: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:17.111: INFO: Number of nodes with available pods: 0 Jan 27 11:10:17.111: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:18.148: INFO: Number of nodes with available pods: 0 Jan 27 11:10:18.148: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:19.158: INFO: Number of nodes with available pods: 0 Jan 27 11:10:19.158: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:20.127: INFO: Number of nodes with available pods: 0 Jan 27 11:10:20.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:21.102: INFO: Number of nodes with available pods: 0 Jan 27 11:10:21.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:10:22.090: INFO: Number of nodes with available pods: 1 Jan 27 11:10:22.090: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-llgk8, will wait for the garbage collector to delete the pods Jan 27 11:10:22.318: INFO: Deleting DaemonSet.extensions daemon-set took: 150.881856ms Jan 27 11:10:22.419: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.30291ms Jan 27 11:10:32.734: INFO: Number of nodes with available pods: 0 Jan 27 11:10:32.734: INFO: Number of running nodes: 0, number of available pods: 0 Jan 27 11:10:32.746: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-llgk8/daemonsets","resourceVersion":"19624231"},"items":null} Jan 27 11:10:32.780: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-llgk8/pods","resourceVersion":"19624231"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:10:32.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-llgk8" for this suite. 
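The rolling-update scenario above amounts to two spec-level moves: declare updateStrategy RollingUpdate on the DaemonSet, then swap the pod template image and let the controller replace pods node by node (the "Wrong image for pod" lines show the old nginx:1.14-alpine pods being phased out in favour of the redis test image). A hedged sketch of those two pieces follows; the label key and container name are placeholders, not the suite's generated values.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate, the strategy this case exercises (and the apps/v1 default).
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx:1.14-alpine", // initial image per the log
				}}},
			},
		},
	}

	// The "Update daemon pods image" step boils down to changing the template
	// image; the controller then deletes and recreates the daemon pods.
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"

	out, _ := json.MarshalIndent(ds.Spec.UpdateStrategy, "", "  ")
	fmt.Println("update strategy:", string(out))
	fmt.Println("new image:", ds.Spec.Template.Spec.Containers[0].Image)
}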
Jan 27 11:10:38.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:10:38.981: INFO: namespace: e2e-tests-daemonsets-llgk8, resource: bindings, ignored listing per whitelist Jan 27 11:10:39.048: INFO: namespace e2e-tests-daemonsets-llgk8 deletion completed in 6.177312588s • [SLOW TEST:47.808 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:10:39.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 27 11:10:39.277: INFO: Waiting up to 5m0s for pod "pod-a67bb765-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-bdcgq" to be "success or failure" Jan 27 11:10:39.287: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.859388ms Jan 27 11:10:41.305: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027966417s Jan 27 11:10:43.327: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049870689s Jan 27 11:10:45.419: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141978356s Jan 27 11:10:47.434: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157450658s Jan 27 11:10:49.462: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185032698s STEP: Saw pod success Jan 27 11:10:49.462: INFO: Pod "pod-a67bb765-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:10:49.469: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a67bb765-40f5-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 11:10:50.629: INFO: Waiting for pod pod-a67bb765-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:10:50.802: INFO: Pod pod-a67bb765-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:10:50.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bdcgq" for this suite. 
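Nearly every short volume test in this log prints the same "Waiting up to 5m0s for pod ... to be 'success or failure'" loop: poll the pod until its phase is Succeeded (pass) or Failed (fail), then pull the container log, as the "Trying to get logs from node" lines show. Below is a rough sketch of that pattern with placeholder namespace and pod name and the pre-1.18 client-go calls; the framework's real helper does more than this.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSuccess polls until the pod reaches Succeeded, or errors out if it
// reaches Failed or the 5m timeout expires; this is the condition the log
// reports as `satisfied condition "success or failure"`.
func waitForPodSuccess(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Placeholder namespace and pod name; the suite generates both per test.
	if err := waitForPodSuccess(cs, "e2e-tests-emptydir-example", "pod-example"); err != nil {
		panic(err)
	}
	fmt.Println("saw pod success")
}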
Jan 27 11:10:56.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:10:56.904: INFO: namespace: e2e-tests-emptydir-bdcgq, resource: bindings, ignored listing per whitelist Jan 27 11:10:57.118: INFO: namespace e2e-tests-emptydir-bdcgq deletion completed in 6.298328603s • [SLOW TEST:18.069 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:10:57.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 11:10:57.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-47xsn" to be "success or failure" Jan 27 11:10:57.485: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.982281ms Jan 27 11:10:59.638: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163836795s Jan 27 11:11:01.654: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179649343s Jan 27 11:11:03.762: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28816184s Jan 27 11:11:05.789: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314734306s Jan 27 11:11:07.801: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.327006518s STEP: Saw pod success Jan 27 11:11:07.801: INFO: Pod "downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:11:07.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 11:11:07.900: INFO: Waiting for pod downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:11:07.912: INFO: Pod downwardapi-volume-b1557ec5-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:11:07.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-47xsn" for this suite. Jan 27 11:11:13.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:11:14.125: INFO: namespace: e2e-tests-projected-47xsn, resource: bindings, ignored listing per whitelist Jan 27 11:11:14.200: INFO: namespace e2e-tests-projected-47xsn deletion completed in 6.278722892s • [SLOW TEST:17.082 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:11:14.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jan 27 11:11:14.440: INFO: Waiting up to 5m0s for pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-containers-fkbtd" to be "success or failure" Jan 27 11:11:14.445: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.938314ms Jan 27 11:11:16.574: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134652251s Jan 27 11:11:18.656: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216195512s Jan 27 11:11:20.668: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228063275s Jan 27 11:11:22.681: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.24126286s Jan 27 11:11:24.713: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.272798718s STEP: Saw pod success Jan 27 11:11:24.713: INFO: Pod "client-containers-bb738822-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:11:24.727: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-bb738822-40f5-11ea-b652-0242ac110005 container test-container: STEP: delete the pod Jan 27 11:11:24.887: INFO: Waiting for pod client-containers-bb738822-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:11:24.945: INFO: Pod client-containers-bb738822-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:11:24.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-fkbtd" for this suite. Jan 27 11:11:32.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:11:33.039: INFO: namespace: e2e-tests-containers-fkbtd, resource: bindings, ignored listing per whitelist Jan 27 11:11:33.117: INFO: namespace e2e-tests-containers-fkbtd deletion completed in 8.161953316s • [SLOW TEST:18.917 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:11:33.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c6b3bff3-40f5-11ea-b652-0242ac110005 STEP: Creating a pod to test consume secrets Jan 27 11:11:33.324: INFO: Waiting up to 5m0s for pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-mxgxr" to be "success or failure" Jan 27 11:11:33.353: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.01827ms Jan 27 11:11:35.370: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046346253s Jan 27 11:11:37.395: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071275627s Jan 27 11:11:39.682: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.358279471s Jan 27 11:11:41.961: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636977268s Jan 27 11:11:43.975: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.650987267s STEP: Saw pod success Jan 27 11:11:43.975: INFO: Pod "pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:11:43.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 27 11:11:44.323: INFO: Waiting for pod pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:11:44.350: INFO: Pod pod-secrets-c6b4cfee-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:11:44.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mxgxr" for this suite. Jan 27 11:11:50.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:11:50.618: INFO: namespace: e2e-tests-secrets-mxgxr, resource: bindings, ignored listing per whitelist Jan 27 11:11:50.707: INFO: namespace e2e-tests-secrets-mxgxr deletion completed in 6.345668565s • [SLOW TEST:17.589 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:11:50.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d12d5a45-40f5-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 11:11:50.891: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-242f4" to be "success or failure" Jan 27 11:11:50.995: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.143939ms Jan 27 11:11:53.348: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457032862s Jan 27 11:11:55.368: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.477491686s Jan 27 11:11:57.571: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679700961s Jan 27 11:11:59.969: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.078194924s Jan 27 11:12:01.979: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.088405021s STEP: Saw pod success Jan 27 11:12:01.979: INFO: Pod "pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:12:01.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 27 11:12:03.007: INFO: Waiting for pod pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:12:03.054: INFO: Pod pod-projected-configmaps-d12ddcff-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:12:03.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-242f4" for this suite. Jan 27 11:12:09.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:12:09.410: INFO: namespace: e2e-tests-projected-242f4, resource: bindings, ignored listing per whitelist Jan 27 11:12:09.430: INFO: namespace e2e-tests-projected-242f4 deletion completed in 6.359071662s • [SLOW TEST:18.723 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:12:09.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 11:12:09.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-vhfxz" to be "success or failure" Jan 27 11:12:09.689: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.104764ms Jan 27 11:12:11.912: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267418408s Jan 27 11:12:13.951: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306339317s Jan 27 11:12:16.255: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.610976063s Jan 27 11:12:18.335: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690820792s Jan 27 11:12:20.363: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.718288923s Jan 27 11:12:22.597: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.952436317s STEP: Saw pod success Jan 27 11:12:22.597: INFO: Pod "downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:12:22.607: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 11:12:22.737: INFO: Waiting for pod downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:12:22.772: INFO: Pod downwardapi-volume-dc5a5459-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:12:22.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vhfxz" for this suite. 
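For reference, the downward-api volume test torn down above amounts to a pod that mounts a downwardAPI volume whose item uses a resourceFieldRef to expose the container's own CPU request as a file. A minimal sketch of such a pod follows; the name, image, command, and paths are illustrative, not the exact spec the framework generates:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumption: any image that can cat a file
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                        # the value the mounted file should reflect
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                    # report the request in millicores, so the file reads "250"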
Jan 27 11:12:28.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:12:28.907: INFO: namespace: e2e-tests-downward-api-vhfxz, resource: bindings, ignored listing per whitelist Jan 27 11:12:28.971: INFO: namespace e2e-tests-downward-api-vhfxz deletion completed in 6.182130242s • [SLOW TEST:19.541 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:12:28.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-e7f54d44-40f5-11ea-b652-0242ac110005 STEP: Creating a pod to test consume secrets Jan 27 11:12:29.250: INFO: Waiting up to 5m0s for pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-mp5bx" to be "success or failure" Jan 27 11:12:29.296: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.627424ms Jan 27 11:12:31.700: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449880833s Jan 27 11:12:33.719: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468338953s Jan 27 11:12:36.063: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81256249s Jan 27 11:12:38.078: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82770552s Jan 27 11:12:40.265: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.014604683s STEP: Saw pod success Jan 27 11:12:40.265: INFO: Pod "pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:12:40.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 27 11:12:40.912: INFO: Waiting for pod pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005 to disappear Jan 27 11:12:41.023: INFO: Pod pod-secrets-e7f6a3bb-40f5-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:12:41.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mp5bx" for this suite. Jan 27 11:12:47.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:12:47.174: INFO: namespace: e2e-tests-secrets-mp5bx, resource: bindings, ignored listing per whitelist Jan 27 11:12:47.215: INFO: namespace e2e-tests-secrets-mp5bx deletion completed in 6.177299759s • [SLOW TEST:18.243 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:12:47.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
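The "simple DaemonSet" created above is essentially a selector plus a pod template; the controller then keeps one pod per schedulable node (one node in this cluster, hence the counts logged below). A rough equivalent manifest, with the label and image chosen only for illustration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                       # matches the name logged above; the rest is a sketch
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1      # assumption: any long-running image works here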
Jan 27 11:12:47.462: INFO: Number of nodes with available pods: 0 Jan 27 11:12:47.462: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:49.271: INFO: Number of nodes with available pods: 0 Jan 27 11:12:49.271: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:49.497: INFO: Number of nodes with available pods: 0 Jan 27 11:12:49.497: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:50.507: INFO: Number of nodes with available pods: 0 Jan 27 11:12:50.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:51.488: INFO: Number of nodes with available pods: 0 Jan 27 11:12:51.488: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:52.538: INFO: Number of nodes with available pods: 0 Jan 27 11:12:52.538: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:53.479: INFO: Number of nodes with available pods: 0 Jan 27 11:12:53.479: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:54.520: INFO: Number of nodes with available pods: 0 Jan 27 11:12:54.521: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:55.485: INFO: Number of nodes with available pods: 0 Jan 27 11:12:55.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:56.498: INFO: Number of nodes with available pods: 0 Jan 27 11:12:56.498: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:57.486: INFO: Number of nodes with available pods: 1 Jan 27 11:12:57.486: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 27 11:12:57.541: INFO: Number of nodes with available pods: 0 Jan 27 11:12:57.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:58.601: INFO: Number of nodes with available pods: 0 Jan 27 11:12:58.601: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:12:59.565: INFO: Number of nodes with available pods: 0 Jan 27 11:12:59.565: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:00.603: INFO: Number of nodes with available pods: 0 Jan 27 11:13:00.603: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:01.559: INFO: Number of nodes with available pods: 0 Jan 27 11:13:01.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:02.582: INFO: Number of nodes with available pods: 0 Jan 27 11:13:02.582: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:03.565: INFO: Number of nodes with available pods: 0 Jan 27 11:13:03.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:04.591: INFO: Number of nodes with available pods: 0 Jan 27 11:13:04.591: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:05.559: INFO: Number of nodes with available pods: 0 Jan 27 11:13:05.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:06.592: INFO: Number of nodes with available pods: 0 Jan 27 11:13:06.592: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:07.560: INFO: Number of nodes with available pods: 0 Jan 27 11:13:07.560: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:08.668: INFO: Number of nodes with available pods: 0 Jan 27 11:13:08.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:09.566: INFO: Number of nodes with available pods: 0 Jan 27 11:13:09.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:10.996: INFO: Number of nodes with available pods: 0 Jan 27 11:13:10.996: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:12.335: INFO: Number of nodes with available pods: 0 Jan 27 11:13:12.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:12.597: INFO: Number of nodes with available pods: 0 Jan 27 11:13:12.597: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:13.557: INFO: Number of nodes with available pods: 0 Jan 27 11:13:13.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:14.587: INFO: Number of nodes with available pods: 0 Jan 27 11:13:14.587: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:13:15.584: INFO: Number of nodes with available pods: 1 Jan 27 11:13:15.584: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mzjrf, will wait for the garbage collector to delete the pods Jan 27 11:13:15.688: INFO: Deleting DaemonSet.extensions daemon-set took: 41.510351ms Jan 27 11:13:15.788: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.466109ms Jan 27 
11:13:32.695: INFO: Number of nodes with available pods: 0 Jan 27 11:13:32.695: INFO: Number of running nodes: 0, number of available pods: 0 Jan 27 11:13:32.709: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mzjrf/daemonsets","resourceVersion":"19624670"},"items":null} Jan 27 11:13:32.715: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mzjrf/pods","resourceVersion":"19624670"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:13:32.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mzjrf" for this suite. Jan 27 11:13:38.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:13:38.947: INFO: namespace: e2e-tests-daemonsets-mzjrf, resource: bindings, ignored listing per whitelist Jan 27 11:13:38.962: INFO: namespace e2e-tests-daemonsets-mzjrf deletion completed in 6.23075009s • [SLOW TEST:51.747 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:13:38.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:13:39.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-4fv57" for this suite. 
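The secure master service check above does not create anything; it reads the built-in "kubernetes" Service in the default namespace and verifies it exposes the API server over HTTPS on port 443. On most clusters that Service looks roughly like the sketch below; the targetPort in particular depends on how the apiserver is exposed and is an assumption here:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443                     # assumption: common kubeadm default; varies by cluster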
Jan 27 11:13:45.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:13:45.418: INFO: namespace: e2e-tests-services-4fv57, resource: bindings, ignored listing per whitelist Jan 27 11:13:45.430: INFO: namespace e2e-tests-services-4fv57 deletion completed in 6.213645545s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.468 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:13:45.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-158db07c-40f6-11ea-b652-0242ac110005 STEP: Creating a pod to test consume secrets Jan 27 11:13:45.677: INFO: Waiting up to 5m0s for pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-8rxgm" to be "success or failure" Jan 27 11:13:45.684: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.646149ms Jan 27 11:13:47.791: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114375742s Jan 27 11:13:49.807: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130323138s Jan 27 11:13:51.829: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151748422s Jan 27 11:13:53.978: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300635253s Jan 27 11:13:56.019: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.342141973s STEP: Saw pod success Jan 27 11:13:56.019: INFO: Pod "pod-secrets-15979a30-40f6-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:13:56.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-15979a30-40f6-11ea-b652-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 27 11:13:56.110: INFO: Waiting for pod pod-secrets-15979a30-40f6-11ea-b652-0242ac110005 to disappear Jan 27 11:13:56.170: INFO: Pod pod-secrets-15979a30-40f6-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:13:56.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8rxgm" for this suite. Jan 27 11:14:02.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:14:02.403: INFO: namespace: e2e-tests-secrets-8rxgm, resource: bindings, ignored listing per whitelist Jan 27 11:14:02.467: INFO: namespace e2e-tests-secrets-8rxgm deletion completed in 6.238768692s • [SLOW TEST:17.037 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:14:02.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-cvbj STEP: Creating a pod to test atomic-volume-subpath Jan 27 11:14:02.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cvbj" in namespace "e2e-tests-subpath-zhq7q" to be "success or failure" Jan 27 11:14:02.867: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 23.184425ms Jan 27 11:14:04.883: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039479007s Jan 27 11:14:06.938: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094160985s Jan 27 11:14:09.220: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.376102437s Jan 27 11:14:11.239: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395418065s Jan 27 11:14:13.251: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.407083332s Jan 27 11:14:15.265: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.421255818s Jan 27 11:14:17.287: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.442968423s Jan 27 11:14:19.301: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 16.457061215s Jan 27 11:14:21.313: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 18.468839013s Jan 27 11:14:23.327: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 20.483061996s Jan 27 11:14:25.345: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 22.501043007s Jan 27 11:14:27.362: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 24.517733114s Jan 27 11:14:29.391: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 26.546950639s Jan 27 11:14:31.424: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 28.580203094s Jan 27 11:14:33.439: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 30.595647981s Jan 27 11:14:35.469: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 32.624857514s Jan 27 11:14:37.518: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Running", Reason="", readiness=false. Elapsed: 34.674455366s Jan 27 11:14:39.864: INFO: Pod "pod-subpath-test-configmap-cvbj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.020271752s STEP: Saw pod success Jan 27 11:14:39.864: INFO: Pod "pod-subpath-test-configmap-cvbj" satisfied condition "success or failure" Jan 27 11:14:39.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-cvbj container test-container-subpath-configmap-cvbj: STEP: delete the pod Jan 27 11:14:40.083: INFO: Waiting for pod pod-subpath-test-configmap-cvbj to disappear Jan 27 11:14:40.133: INFO: Pod pod-subpath-test-configmap-cvbj no longer exists STEP: Deleting pod pod-subpath-test-configmap-cvbj Jan 27 11:14:40.133: INFO: Deleting pod "pod-subpath-test-configmap-cvbj" in namespace "e2e-tests-subpath-zhq7q" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:14:40.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-zhq7q" for this suite. 
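The Subpath test just deleted mounts a single ConfigMap key over a file that already exists in the container image, using subPath so only that one file is replaced. A minimal sketch under those assumptions (the real test uses a purpose-built test image and also verifies atomic updates, which this omits; names, key, and target path are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-example-config           # hypothetical name
data:
  passwd: "configmap-provided content"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                       # assumption: /etc/passwd exists in this image
    command: ["sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: config
      mountPath: /etc/passwd             # an existing file inside the container
      subPath: passwd                    # bind-mount only this key over that file
  volumes:
  - name: config
    configMap:
      name: subpath-example-config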
Jan 27 11:14:46.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:14:46.287: INFO: namespace: e2e-tests-subpath-zhq7q, resource: bindings, ignored listing per whitelist Jan 27 11:14:46.291: INFO: namespace e2e-tests-subpath-zhq7q deletion completed in 6.142422123s • [SLOW TEST:43.824 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:14:46.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jan 27 11:14:46.381: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix182117740/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:14:46.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bhp6p" for this suite. 
Jan 27 11:14:52.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:14:52.720: INFO: namespace: e2e-tests-kubectl-bhp6p, resource: bindings, ignored listing per whitelist Jan 27 11:14:52.755: INFO: namespace e2e-tests-kubectl-bhp6p deletion completed in 6.266686755s • [SLOW TEST:6.464 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:14:52.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 27 11:14:52.963: INFO: Waiting up to 5m0s for pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-249sv" to be "success or failure" Jan 27 11:14:52.976: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.218338ms Jan 27 11:14:55.291: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328235337s Jan 27 11:14:57.302: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338940598s Jan 27 11:14:59.318: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355196699s Jan 27 11:15:01.342: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378842882s Jan 27 11:15:03.356: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.39251368s STEP: Saw pod success Jan 27 11:15:03.356: INFO: Pod "downward-api-3db1f844-40f6-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:15:03.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3db1f844-40f6-11ea-b652-0242ac110005 container dapi-container: STEP: delete the pod Jan 27 11:15:03.583: INFO: Waiting for pod downward-api-3db1f844-40f6-11ea-b652-0242ac110005 to disappear Jan 27 11:15:03.623: INFO: Pod downward-api-3db1f844-40f6-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:15:03.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-249sv" for this suite. 
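The Downward API test above exposes the node's IP to the container through an environment variable sourced from status.hostIP. A minimal sketch of that kind of pod (name, image, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                       # assumption
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP       # populated by the kubelet with the node's IP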
Jan 27 11:15:10.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:15:10.220: INFO: namespace: e2e-tests-downward-api-249sv, resource: bindings, ignored listing per whitelist Jan 27 11:15:10.242: INFO: namespace e2e-tests-downward-api-249sv deletion completed in 6.605502521s • [SLOW TEST:17.486 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:15:10.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 27 11:15:10.430: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:15:28.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-rkrzr" for this suite. 
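The init-container test above relies on the rule that with restartPolicy Never a failed init container is not retried: the pod goes to phase Failed and the regular containers never start. A minimal pod of that shape (names, image, and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example                # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox                       # assumption
    command: ["sh", "-c", "exit 1"]      # init container exits non-zero
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo this should never run"]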
Jan 27 11:15:34.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:15:34.519: INFO: namespace: e2e-tests-init-container-rkrzr, resource: bindings, ignored listing per whitelist Jan 27 11:15:34.571: INFO: namespace e2e-tests-init-container-rkrzr deletion completed in 6.260927848s • [SLOW TEST:24.328 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:15:34.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:16:35.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-c2dk4" for this suite. 
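The container-runtime blackbox test above runs three short-lived containers (the terminate-cmd-rpa/rpof/rpn variants) under different restart policies and checks the resulting RestartCount, Phase, Ready condition, and State. One variant, sketched with placeholder names, is a restartPolicy Never pod whose container exits non-zero; the expected observation is phase Failed, a Terminated state with exitCode 1, and restartCount 0, whereas OnFailure or Always would instead restart it and the count would climb:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example            # hypothetical name; the test exercises three policy variants
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox                       # assumption
    command: ["sh", "-c", "exit 1"]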
Jan 27 11:16:41.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:16:41.349: INFO: namespace: e2e-tests-container-runtime-c2dk4, resource: bindings, ignored listing per whitelist Jan 27 11:16:41.379: INFO: namespace e2e-tests-container-runtime-c2dk4 deletion completed in 6.245773838s • [SLOW TEST:66.808 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:16:41.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:16:41.549: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 27 11:16:46.954: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 27 11:16:50.984: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 27 11:16:53.001: INFO: Creating deployment "test-rollover-deployment" Jan 27 11:16:53.026: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 27 11:16:55.051: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 27 11:16:55.075: INFO: Ensure that both replica sets have 1 created replica Jan 27 11:16:55.297: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 27 11:16:55.315: INFO: Updating deployment test-rollover-deployment Jan 27 11:16:55.315: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 27 11:16:58.213: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 27 11:16:58.877: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 27 11:16:58.896: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:16:58.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720616, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:00.922: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:00.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720616, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:03.236: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:03.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720616, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:04.980: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:04.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720616, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 27 11:17:06.923: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:06.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:08.927: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:08.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:10.940: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:10.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:12.923: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:12.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:14.920: INFO: all replica sets need to contain the pod-template-hash label Jan 27 11:17:14.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:16.988: INFO: Jan 27 11:17:16.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720636, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715720613, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 11:17:18.930: INFO: Jan 27 11:17:18.930: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 27 11:17:18.946: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-frh27,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frh27/deployments/test-rollover-deployment,UID:854199f7-40f6-11ea-a994-fa163e34d433,ResourceVersion:19625235,Generation:2,CreationTimestamp:2020-01-27 11:16:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-27 11:16:53 +0000 UTC 2020-01-27 11:16:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-27 11:17:17 +0000 UTC 2020-01-27 11:16:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 27 11:17:18.951: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-frh27,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frh27/replicasets/test-rollover-deployment-5b8479fdb6,UID:86a32bda-40f6-11ea-a994-fa163e34d433,ResourceVersion:19625226,Generation:2,CreationTimestamp:2020-01-27 11:16:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 854199f7-40f6-11ea-a994-fa163e34d433 0xc001bc10c7 
0xc001bc10c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 27 11:17:18.951: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 27 11:17:18.952: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-frh27,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frh27/replicasets/test-rollover-controller,UID:7e6aae24-40f6-11ea-a994-fa163e34d433,ResourceVersion:19625234,Generation:2,CreationTimestamp:2020-01-27 11:16:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 854199f7-40f6-11ea-a994-fa163e34d433 0xc001bc0f37 0xc001bc0f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 27 11:17:18.952: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-frh27,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frh27/replicasets/test-rollover-deployment-58494b7559,UID:854e9a86-40f6-11ea-a994-fa163e34d433,ResourceVersion:19625193,Generation:2,CreationTimestamp:2020-01-27 11:16:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 854199f7-40f6-11ea-a994-fa163e34d433 0xc001bc0ff7 0xc001bc0ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 27 11:17:18.958: INFO: Pod "test-rollover-deployment-5b8479fdb6-rqghq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-rqghq,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-frh27,SelfLink:/api/v1/namespaces/e2e-tests-deployment-frh27/pods/test-rollover-deployment-5b8479fdb6-rqghq,UID:870d8a61-40f6-11ea-a994-fa163e34d433,ResourceVersion:19625211,Generation:0,CreationTimestamp:2020-01-27 11:16:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 86a32bda-40f6-11ea-a994-fa163e34d433 0xc001cd2867 0xc001cd2868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-65qm5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-65qm5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-65qm5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cd28e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cd2900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:16:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:17:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:17:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-01-27 11:16:56 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-27 11:16:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-27 11:17:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4bf8dcb41e7496b30396d3dd8e6e5e719cebd2e03a444058cdae064455ab353f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:17:18.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-frh27" for this suite. Jan 27 11:17:27.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:17:27.173: INFO: namespace: e2e-tests-deployment-frh27, resource: bindings, ignored listing per whitelist Jan 27 11:17:27.193: INFO: namespace e2e-tests-deployment-frh27 deletion completed in 8.226220224s • [SLOW TEST:45.814 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:17:27.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0127 11:17:40.957719 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
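
The rollover test above swaps the Deployment's pod template over to the redis image and waits until the old ReplicaSets are scaled to 0 while MinReadySeconds is honored. A minimal Go sketch of a comparable apps/v1 Deployment, assuming client-go API types of roughly this release; the object name and image follow the log, everything else is illustrative:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "rollover-pod"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        int32Ptr(1),
			MinReadySeconds: 10, // matches MinReadySeconds:10 in the ReplicaSet dump above
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0", // the image the winning ReplicaSet runs
					}},
				},
			},
		},
	}
	// Changing deployment.Spec.Template (e.g. the container image) and updating
	// the object is what rolls the workload over to a fresh ReplicaSet while the
	// old ones end up at Replicas:*0, as in the dumps above.
	fmt.Println(deployment.Name)
}

kubectl rollout status deployment/test-rollover-deployment would watch the same transition the test polls for.
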
Jan 27 11:17:40.957: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:17:40.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vj5c4" for this suite. Jan 27 11:17:47.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:17:47.367: INFO: namespace: e2e-tests-gc-vj5c4, resource: bindings, ignored listing per whitelist Jan 27 11:17:47.386: INFO: namespace e2e-tests-gc-vj5c4 deletion completed in 6.2103502s • [SLOW TEST:20.193 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:17:47.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-98sgj [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 27 11:17:47.671: INFO: Found 0 stateful pods, waiting for 3 Jan 27 11:17:57.683: INFO: Found 1 stateful pods, waiting for 3 Jan 27 11:18:07.691: INFO: Found 2 stateful pods, waiting for 3 Jan 
27 11:18:17.681: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:18:17.681: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:18:17.681: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 27 11:18:27.688: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:18:27.688: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:18:27.688: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 27 11:18:27.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-98sgj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 11:18:28.356: INFO: stderr: "I0127 11:18:28.046189 882 log.go:172] (0xc000742370) (0xc000762640) Create stream\nI0127 11:18:28.046348 882 log.go:172] (0xc000742370) (0xc000762640) Stream added, broadcasting: 1\nI0127 11:18:28.052606 882 log.go:172] (0xc000742370) Reply frame received for 1\nI0127 11:18:28.052643 882 log.go:172] (0xc000742370) (0xc0005a8dc0) Create stream\nI0127 11:18:28.052652 882 log.go:172] (0xc000742370) (0xc0005a8dc0) Stream added, broadcasting: 3\nI0127 11:18:28.053763 882 log.go:172] (0xc000742370) Reply frame received for 3\nI0127 11:18:28.053793 882 log.go:172] (0xc000742370) (0xc00069c000) Create stream\nI0127 11:18:28.053833 882 log.go:172] (0xc000742370) (0xc00069c000) Stream added, broadcasting: 5\nI0127 11:18:28.054815 882 log.go:172] (0xc000742370) Reply frame received for 5\nI0127 11:18:28.229002 882 log.go:172] (0xc000742370) Data frame received for 3\nI0127 11:18:28.229051 882 log.go:172] (0xc0005a8dc0) (3) Data frame handling\nI0127 11:18:28.229089 882 log.go:172] (0xc0005a8dc0) (3) Data frame sent\nI0127 11:18:28.347867 882 log.go:172] (0xc000742370) (0xc0005a8dc0) Stream removed, broadcasting: 3\nI0127 11:18:28.347991 882 log.go:172] (0xc000742370) Data frame received for 1\nI0127 11:18:28.348000 882 log.go:172] (0xc000762640) (1) Data frame handling\nI0127 11:18:28.348010 882 log.go:172] (0xc000762640) (1) Data frame sent\nI0127 11:18:28.348024 882 log.go:172] (0xc000742370) (0xc00069c000) Stream removed, broadcasting: 5\nI0127 11:18:28.348079 882 log.go:172] (0xc000742370) (0xc000762640) Stream removed, broadcasting: 1\nI0127 11:18:28.348125 882 log.go:172] (0xc000742370) Go away received\nI0127 11:18:28.348656 882 log.go:172] (0xc000742370) (0xc000762640) Stream removed, broadcasting: 1\nI0127 11:18:28.348715 882 log.go:172] (0xc000742370) (0xc0005a8dc0) Stream removed, broadcasting: 3\nI0127 11:18:28.348733 882 log.go:172] (0xc000742370) (0xc00069c000) Stream removed, broadcasting: 5\n" Jan 27 11:18:28.357: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 11:18:28.357: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 27 11:18:38.484: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 27 11:18:48.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-98sgj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jan 27 11:18:49.218: INFO: stderr: "I0127 11:18:48.884282 904 log.go:172] (0xc000168840) (0xc0007c1400) Create stream\nI0127 11:18:48.884496 904 log.go:172] (0xc000168840) (0xc0007c1400) Stream added, broadcasting: 1\nI0127 11:18:48.893405 904 log.go:172] (0xc000168840) Reply frame received for 1\nI0127 11:18:48.893437 904 log.go:172] (0xc000168840) (0xc000518000) Create stream\nI0127 11:18:48.893444 904 log.go:172] (0xc000168840) (0xc000518000) Stream added, broadcasting: 3\nI0127 11:18:48.894539 904 log.go:172] (0xc000168840) Reply frame received for 3\nI0127 11:18:48.894580 904 log.go:172] (0xc000168840) (0xc000642000) Create stream\nI0127 11:18:48.894593 904 log.go:172] (0xc000168840) (0xc000642000) Stream added, broadcasting: 5\nI0127 11:18:48.895614 904 log.go:172] (0xc000168840) Reply frame received for 5\nI0127 11:18:49.064178 904 log.go:172] (0xc000168840) Data frame received for 3\nI0127 11:18:49.064225 904 log.go:172] (0xc000518000) (3) Data frame handling\nI0127 11:18:49.064253 904 log.go:172] (0xc000518000) (3) Data frame sent\nI0127 11:18:49.210980 904 log.go:172] (0xc000168840) Data frame received for 1\nI0127 11:18:49.211046 904 log.go:172] (0xc0007c1400) (1) Data frame handling\nI0127 11:18:49.211070 904 log.go:172] (0xc0007c1400) (1) Data frame sent\nI0127 11:18:49.211091 904 log.go:172] (0xc000168840) (0xc0007c1400) Stream removed, broadcasting: 1\nI0127 11:18:49.211905 904 log.go:172] (0xc000168840) (0xc000518000) Stream removed, broadcasting: 3\nI0127 11:18:49.211931 904 log.go:172] (0xc000168840) (0xc000642000) Stream removed, broadcasting: 5\nI0127 11:18:49.211951 904 log.go:172] (0xc000168840) (0xc0007c1400) Stream removed, broadcasting: 1\nI0127 11:18:49.211958 904 log.go:172] (0xc000168840) (0xc000518000) Stream removed, broadcasting: 3\nI0127 11:18:49.211964 904 log.go:172] (0xc000168840) (0xc000642000) Stream removed, broadcasting: 5\n" Jan 27 11:18:49.218: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 11:18:49.218: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 11:18:59.279: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:18:59.280: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:18:59.280: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:18:59.280: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:19:10.093: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:19:10.093: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:19:10.093: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:19:19.308: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:19:19.308: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:19:19.308: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:19:29.310: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 
11:19:29.310: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 27 11:19:39.445: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update STEP: Rolling back to a previous revision Jan 27 11:19:49.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-98sgj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 27 11:19:49.892: INFO: stderr: "I0127 11:19:49.498002 926 log.go:172] (0xc000720370) (0xc000742640) Create stream\nI0127 11:19:49.498304 926 log.go:172] (0xc000720370) (0xc000742640) Stream added, broadcasting: 1\nI0127 11:19:49.505009 926 log.go:172] (0xc000720370) Reply frame received for 1\nI0127 11:19:49.505049 926 log.go:172] (0xc000720370) (0xc0007426e0) Create stream\nI0127 11:19:49.505058 926 log.go:172] (0xc000720370) (0xc0007426e0) Stream added, broadcasting: 3\nI0127 11:19:49.506348 926 log.go:172] (0xc000720370) Reply frame received for 3\nI0127 11:19:49.506381 926 log.go:172] (0xc000720370) (0xc000742780) Create stream\nI0127 11:19:49.506391 926 log.go:172] (0xc000720370) (0xc000742780) Stream added, broadcasting: 5\nI0127 11:19:49.507520 926 log.go:172] (0xc000720370) Reply frame received for 5\nI0127 11:19:49.749214 926 log.go:172] (0xc000720370) Data frame received for 3\nI0127 11:19:49.749264 926 log.go:172] (0xc0007426e0) (3) Data frame handling\nI0127 11:19:49.749289 926 log.go:172] (0xc0007426e0) (3) Data frame sent\nI0127 11:19:49.883375 926 log.go:172] (0xc000720370) Data frame received for 1\nI0127 11:19:49.883471 926 log.go:172] (0xc000742640) (1) Data frame handling\nI0127 11:19:49.883485 926 log.go:172] (0xc000742640) (1) Data frame sent\nI0127 11:19:49.884810 926 log.go:172] (0xc000720370) (0xc0007426e0) Stream removed, broadcasting: 3\nI0127 11:19:49.884840 926 log.go:172] (0xc000720370) (0xc000742640) Stream removed, broadcasting: 1\nI0127 11:19:49.885002 926 log.go:172] (0xc000720370) (0xc000742780) Stream removed, broadcasting: 5\nI0127 11:19:49.885032 926 log.go:172] (0xc000720370) (0xc000742640) Stream removed, broadcasting: 1\nI0127 11:19:49.885043 926 log.go:172] (0xc000720370) (0xc0007426e0) Stream removed, broadcasting: 3\nI0127 11:19:49.885052 926 log.go:172] (0xc000720370) (0xc000742780) Stream removed, broadcasting: 5\nI0127 11:19:49.885062 926 log.go:172] (0xc000720370) Go away received\n" Jan 27 11:19:49.892: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 27 11:19:49.892: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 27 11:20:00.028: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 27 11:20:10.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-98sgj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 27 11:20:10.963: INFO: stderr: "I0127 11:20:10.451695 949 log.go:172] (0xc000704370) (0xc0007a6640) Create stream\nI0127 11:20:10.451964 949 log.go:172] (0xc000704370) (0xc0007a6640) Stream added, broadcasting: 1\nI0127 11:20:10.461190 949 log.go:172] (0xc000704370) Reply frame received for 1\nI0127 11:20:10.461276 949 log.go:172] (0xc000704370) (0xc0005bee60) Create stream\nI0127 11:20:10.461298 949 log.go:172] (0xc000704370) (0xc0005bee60) Stream added, broadcasting: 3\nI0127 11:20:10.464286 949 log.go:172] (0xc000704370) 
Reply frame received for 3\nI0127 11:20:10.464417 949 log.go:172] (0xc000704370) (0xc0007b6000) Create stream\nI0127 11:20:10.464504 949 log.go:172] (0xc000704370) (0xc0007b6000) Stream added, broadcasting: 5\nI0127 11:20:10.466427 949 log.go:172] (0xc000704370) Reply frame received for 5\nI0127 11:20:10.772723 949 log.go:172] (0xc000704370) Data frame received for 3\nI0127 11:20:10.772785 949 log.go:172] (0xc0005bee60) (3) Data frame handling\nI0127 11:20:10.772800 949 log.go:172] (0xc0005bee60) (3) Data frame sent\nI0127 11:20:10.954371 949 log.go:172] (0xc000704370) (0xc0005bee60) Stream removed, broadcasting: 3\nI0127 11:20:10.954511 949 log.go:172] (0xc000704370) Data frame received for 1\nI0127 11:20:10.954534 949 log.go:172] (0xc0007a6640) (1) Data frame handling\nI0127 11:20:10.954585 949 log.go:172] (0xc0007a6640) (1) Data frame sent\nI0127 11:20:10.954599 949 log.go:172] (0xc000704370) (0xc0007a6640) Stream removed, broadcasting: 1\nI0127 11:20:10.954763 949 log.go:172] (0xc000704370) (0xc0007b6000) Stream removed, broadcasting: 5\nI0127 11:20:10.955038 949 log.go:172] (0xc000704370) Go away received\nI0127 11:20:10.955456 949 log.go:172] (0xc000704370) (0xc0007a6640) Stream removed, broadcasting: 1\nI0127 11:20:10.955538 949 log.go:172] (0xc000704370) (0xc0005bee60) Stream removed, broadcasting: 3\nI0127 11:20:10.955557 949 log.go:172] (0xc000704370) (0xc0007b6000) Stream removed, broadcasting: 5\n" Jan 27 11:20:10.963: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 27 11:20:10.963: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 27 11:20:21.015: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:20:21.015: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:21.015: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:21.015: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:31.095: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:20:31.095: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:31.095: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:41.047: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:20:41.047: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:41.047: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:20:51.045: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update Jan 27 11:20:51.045: INFO: Waiting for Pod e2e-tests-statefulset-98sgj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 27 11:21:01.296: INFO: Waiting for StatefulSet e2e-tests-statefulset-98sgj/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 27 11:21:11.036: INFO: Deleting all statefulset in ns 
e2e-tests-statefulset-98sgj Jan 27 11:21:11.045: INFO: Scaling statefulset ss2 to 0 Jan 27 11:21:31.119: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 11:21:31.129: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:21:31.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-98sgj" for this suite. Jan 27 11:21:39.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:21:39.578: INFO: namespace: e2e-tests-statefulset-98sgj, resource: bindings, ignored listing per whitelist Jan 27 11:21:39.580: INFO: namespace e2e-tests-statefulset-98sgj deletion completed in 8.261953973s • [SLOW TEST:232.193 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:21:39.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 11:21:39.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-npm6x" to be "success or failure" Jan 27 11:21:39.780: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.14954ms Jan 27 11:21:42.068: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304776332s Jan 27 11:21:44.086: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322713213s Jan 27 11:21:46.099: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335845974s Jan 27 11:21:48.426: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662568108s Jan 27 11:21:50.470: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.706407447s Jan 27 11:21:52.493: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.730050836s STEP: Saw pod success Jan 27 11:21:52.493: INFO: Pod "downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:21:52.501: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 11:21:52.645: INFO: Waiting for pod downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005 to disappear Jan 27 11:21:52.650: INFO: Pod downwardapi-volume-302b500a-40f7-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:21:52.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-npm6x" for this suite. Jan 27 11:21:58.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:21:59.087: INFO: namespace: e2e-tests-projected-npm6x, resource: bindings, ignored listing per whitelist Jan 27 11:21:59.087: INFO: namespace e2e-tests-projected-npm6x deletion completed in 6.430722224s • [SLOW TEST:19.507 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:21:59.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-3bc4e006-40f7-11ea-b652-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-3bc4dfe7-40f7-11ea-b652-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 27 11:21:59.309: INFO: Waiting up to 5m0s for pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-c2dcf" to be "success or failure" Jan 27 11:21:59.327: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.426882ms Jan 27 11:22:01.336: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027479291s Jan 27 11:22:03.367: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.058468915s Jan 27 11:22:05.457: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148465375s Jan 27 11:22:07.566: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257335527s Jan 27 11:22:09.574: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.265714176s Jan 27 11:22:11.593: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.284250855s STEP: Saw pod success Jan 27 11:22:11.593: INFO: Pod "projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:22:11.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005 container projected-all-volume-test: STEP: delete the pod Jan 27 11:22:12.337: INFO: Waiting for pod projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005 to disappear Jan 27 11:22:12.352: INFO: Pod projected-volume-3bc4dfa1-40f7-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:22:12.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c2dcf" for this suite. Jan 27 11:22:18.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:22:18.637: INFO: namespace: e2e-tests-projected-c2dcf, resource: bindings, ignored listing per whitelist Jan 27 11:22:18.746: INFO: namespace e2e-tests-projected-c2dcf deletion completed in 6.355925478s • [SLOW TEST:19.659 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:22:18.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 11:22:18.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-m9dj7" to be "success or failure" Jan 27 11:22:19.080: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 121.746363ms Jan 27 11:22:21.256: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298111987s Jan 27 11:22:23.285: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327340934s Jan 27 11:22:25.691: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.732898118s Jan 27 11:22:27.918: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.960071818s Jan 27 11:22:29.936: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.978218589s STEP: Saw pod success Jan 27 11:22:29.936: INFO: Pod "downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:22:29.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 11:22:30.447: INFO: Waiting for pod downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005 to disappear Jan 27 11:22:30.632: INFO: Pod downwardapi-volume-47892e99-40f7-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:22:30.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m9dj7" for this suite. Jan 27 11:22:36.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:22:36.982: INFO: namespace: e2e-tests-projected-m9dj7, resource: bindings, ignored listing per whitelist Jan 27 11:22:37.000: INFO: namespace e2e-tests-projected-m9dj7 deletion completed in 6.35823973s • [SLOW TEST:18.253 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:22:37.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:22:47.477: INFO: Waiting up to 5m0s for pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-pods-t2hwv" to be "success or failure" Jan 27 11:22:47.501: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", 
readiness=false. Elapsed: 23.9941ms Jan 27 11:22:49.676: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199032335s Jan 27 11:22:51.714: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2366021s Jan 27 11:22:53.968: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.491423509s Jan 27 11:22:55.981: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503653586s Jan 27 11:22:58.020: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.542439393s STEP: Saw pod success Jan 27 11:22:58.020: INFO: Pod "client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:22:58.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005 container env3cont: STEP: delete the pod Jan 27 11:22:58.190: INFO: Waiting for pod client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005 to disappear Jan 27 11:22:58.211: INFO: Pod client-envvars-5885c7f1-40f7-11ea-b652-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:22:58.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-t2hwv" for this suite. Jan 27 11:23:44.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:23:44.472: INFO: namespace: e2e-tests-pods-t2hwv, resource: bindings, ignored listing per whitelist Jan 27 11:23:44.711: INFO: namespace e2e-tests-pods-t2hwv deletion completed in 46.45538527s • [SLOW TEST:67.710 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:23:44.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:23:44.920: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jan 27 11:23:44.929: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kttsq/daemonsets","resourceVersion":"19626238"},"items":null} Jan 27 11:23:44.933: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kttsq/pods","resourceVersion":"19626238"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:23:44.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kttsq" for this suite. Jan 27 11:23:51.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:23:51.079: INFO: namespace: e2e-tests-daemonsets-kttsq, resource: bindings, ignored listing per whitelist Jan 27 11:23:51.120: INFO: namespace e2e-tests-daemonsets-kttsq deletion completed in 6.145070649s S [SKIPPING] [6.409 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:23:44.920: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:23:51.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:23:57.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-tzg7k" for this suite. Jan 27 11:24:03.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:24:03.862: INFO: namespace: e2e-tests-namespaces-tzg7k, resource: bindings, ignored listing per whitelist Jan 27 11:24:03.916: INFO: namespace e2e-tests-namespaces-tzg7k deletion completed in 6.307944752s STEP: Destroying namespace "e2e-tests-nsdeletetest-8bsww" for this suite. Jan 27 11:24:03.920: INFO: Namespace e2e-tests-nsdeletetest-8bsww was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-gzf8s" for this suite. 
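
The namespace test above creates a Service inside a throwaway namespace, deletes the namespace, recreates it, and verifies the Service is gone: namespace deletion garbage-collects every namespaced object. A minimal Go sketch of the two objects involved, assuming client-go API types; the names, selector, and port are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A short-lived namespace and a Service that lives inside it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest-example"}}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service", Namespace: ns.Name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "nsdeletetest"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	// The test deletes ns, not svc; the Service is removed as part of the
	// namespace deletion, so after the namespace is recreated no Service exists,
	// which is what "Verifying there is no service in the namespace" checks.
	fmt.Println(ns.Name, svc.Name)
}
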
Jan 27 11:24:11.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:24:12.065: INFO: namespace: e2e-tests-nsdeletetest-gzf8s, resource: bindings, ignored listing per whitelist Jan 27 11:24:12.076: INFO: namespace e2e-tests-nsdeletetest-gzf8s deletion completed in 8.156339719s • [SLOW TEST:20.956 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:24:12.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-8b1451ac-40f7-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 11:24:12.284: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-f27n2" to be "success or failure" Jan 27 11:24:12.370: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.388533ms Jan 27 11:24:14.829: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544750973s Jan 27 11:24:16.845: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.560694163s Jan 27 11:24:18.871: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587032171s Jan 27 11:24:20.885: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60076014s Jan 27 11:24:22.900: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.616477545s STEP: Saw pod success Jan 27 11:24:22.900: INFO: Pod "pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:24:22.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 27 11:24:23.694: INFO: Waiting for pod pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005 to disappear Jan 27 11:24:23.727: INFO: Pod pod-projected-configmaps-8b152a76-40f7-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:24:23.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f27n2" for this suite. Jan 27 11:24:29.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:24:30.016: INFO: namespace: e2e-tests-projected-f27n2, resource: bindings, ignored listing per whitelist Jan 27 11:24:30.047: INFO: namespace e2e-tests-projected-f27n2 deletion completed in 6.251904751s • [SLOW TEST:17.971 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:24:30.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 27 11:24:30.230: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 27 11:24:30.245: INFO: Waiting for terminating namespaces to be deleted... 
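
The projected-ConfigMap test that finished above mounts a ConfigMap through a projected volume and remaps a key to a custom file path inside the mount. A minimal Go sketch of such a pod, assuming client-go API types; the ConfigMap name, key, path, and the busybox image are illustrative stand-ins for the generated names in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-map-example", // assumed to exist with key data-1
								},
								// The "mappings" part: key data-1 is surfaced as the file path/to/data-1.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative; the suite uses its own test image
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
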
Jan 27 11:24:30.249: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Jan 27 11:24:30.263: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 27 11:24:30.263: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 27 11:24:30.263: INFO: Container coredns ready: true, restart count 0 Jan 27 11:24:30.263: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 27 11:24:30.263: INFO: Container kube-proxy ready: true, restart count 0 Jan 27 11:24:30.263: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 27 11:24:30.263: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 27 11:24:30.263: INFO: Container weave ready: true, restart count 0 Jan 27 11:24:30.263: INFO: Container weave-npc ready: true, restart count 0 Jan 27 11:24:30.263: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 27 11:24:30.263: INFO: Container coredns ready: true, restart count 0 Jan 27 11:24:30.263: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 27 11:24:30.263: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15edbaa6e79aeb79], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:24:31.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-srzb5" for this suite. 
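
The scheduling failure recorded above comes from a pod whose nodeSelector no node can satisfy. A minimal Go sketch of such a pod, assuming client-go API types; the label key/value and the image are illustrative (the conformance test generates its own):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in this single-node cluster carries this label, so the
			// scheduler can only report:
			// "0/1 nodes are available: 1 node(s) didn't match node selector."
			NodeSelector: map[string]string{"example.com/unsatisfiable": "true"},
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
			}},
		},
	}
	fmt.Println(pod.Name, pod.Spec.NodeSelector)
}
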
Jan 27 11:24:37.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:24:37.490: INFO: namespace: e2e-tests-sched-pred-srzb5, resource: bindings, ignored listing per whitelist Jan 27 11:24:37.587: INFO: namespace e2e-tests-sched-pred-srzb5 deletion completed in 6.269938751s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.539 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:24:37.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 11:24:37.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-mh69g" to be "success or failure" Jan 27 11:24:37.720: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.760489ms Jan 27 11:24:39.739: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028748448s Jan 27 11:24:41.756: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046185364s Jan 27 11:24:43.999: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288497591s Jan 27 11:24:46.011: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30088139s Jan 27 11:24:48.024: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.314255958s STEP: Saw pod success Jan 27 11:24:48.024: INFO: Pod "downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:24:48.037: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 11:24:48.170: INFO: Waiting for pod downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005 to disappear Jan 27 11:24:48.370: INFO: Pod downwardapi-volume-9a3c2fdf-40f7-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:24:48.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mh69g" for this suite. Jan 27 11:24:55.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:24:55.237: INFO: namespace: e2e-tests-projected-mh69g, resource: bindings, ignored listing per whitelist Jan 27 11:24:55.364: INFO: namespace e2e-tests-projected-mh69g deletion completed in 6.958766176s • [SLOW TEST:17.776 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:24:55.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:24:55.585: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.575604ms)
Jan 27 11:24:55.591: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.70568ms)
Jan 27 11:24:55.595: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.700252ms)
Jan 27 11:24:55.600: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.974991ms)
Jan 27 11:24:55.605: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.301451ms)
Jan 27 11:24:55.611: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.243024ms)
Jan 27 11:24:55.617: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.13249ms)
Jan 27 11:24:55.622: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.507452ms)
Jan 27 11:24:55.626: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.719138ms)
Jan 27 11:24:55.631: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.643967ms)
Jan 27 11:24:55.635: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.020746ms)
Jan 27 11:24:55.640: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.823254ms)
Jan 27 11:24:55.645: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.679852ms)
Jan 27 11:24:55.690: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 45.655907ms)
Jan 27 11:24:55.697: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.180317ms)
Jan 27 11:24:55.702: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.452996ms)
Jan 27 11:24:55.707: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.173158ms)
Jan 27 11:24:55.714: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.035662ms)
Jan 27 11:24:55.722: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.40532ms)
Jan 27 11:24:55.728: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.369347ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:24:55.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-k7hch" for this suite.
Jan 27 11:25:01.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:25:01.910: INFO: namespace: e2e-tests-proxy-k7hch, resource: bindings, ignored listing per whitelist
Jan 27 11:25:01.952: INFO: namespace e2e-tests-proxy-k7hch deletion completed in 6.212456235s

• [SLOW TEST:6.588 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
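Editor's note: the twenty numbered requests above all hit the node proxy subresource, and the apiserver forwards each one to the kubelet's /logs/ endpoint, which is why the response body is the node's /var/log directory listing ("alternatives.log" above). As a rough illustration (not the e2e framework's own helper), the same request can be issued with client-go; the kubeconfig path and node name are copied from this run and are assumptions for any other cluster.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/<node>/proxy/logs/ — the apiserver proxies this to the kubelet.
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/").
		DoRaw() // client-go releases from roughly v0.18 on take a context: DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}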
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:25:01.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 11:25:02.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-nqq46" to be "success or failure"
Jan 27 11:25:02.223: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.301637ms
Jan 27 11:25:04.244: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042164395s
Jan 27 11:25:06.254: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05231694s
Jan 27 11:25:08.384: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18220878s
Jan 27 11:25:10.400: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198065748s
Jan 27 11:25:12.494: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292482495s
STEP: Saw pod success
Jan 27 11:25:12.494: INFO: Pod "downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:25:12.532: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 11:25:13.115: INFO: Waiting for pod downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005 to disappear
Jan 27 11:25:13.151: INFO: Pod downwardapi-volume-a8d36bb1-40f7-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:25:13.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nqq46" for this suite.
Jan 27 11:25:19.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:25:19.530: INFO: namespace: e2e-tests-downward-api-nqq46, resource: bindings, ignored listing per whitelist
Jan 27 11:25:19.639: INFO: namespace e2e-tests-downward-api-nqq46 deletion completed in 6.389938584s

• [SLOW TEST:17.686 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
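Editor's note: both downward API volume tests above ("Projected downwardAPI" and "Downward API volume") create a pod whose own name is written into a file by the kubelet and read back by the container. A minimal sketch of that kind of pod spec follows; the pod name, image, and mount path are illustrative assumptions, not the generated values from this run.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPodnamePod builds a pod that projects metadata.name into a file
// via a downwardAPI volume; the container simply prints the file.
func downwardAPIPodnamePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}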
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:25:19.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b358d175-40f7-11ea-b652-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-b358d1f3-40f7-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b358d175-40f7-11ea-b652-0242ac110005
STEP: Updating configmap cm-test-opt-upd-b358d1f3-40f7-11ea-b652-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-b358d218-40f7-11ea-b652-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:25:42.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-snss5" for this suite.
Jan 27 11:26:06.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:26:06.731: INFO: namespace: e2e-tests-projected-snss5, resource: bindings, ignored listing per whitelist
Jan 27 11:26:06.762: INFO: namespace e2e-tests-projected-snss5 deletion completed in 24.269678928s

• [SLOW TEST:47.122 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
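Editor's note: the "optional updates" test above references ConfigMaps that are deleted, updated, and created while the pod is running, and waits for the projected volume to reflect each change. The sketch below shows the shape of such a pod: because the projection is marked optional, the pod starts even while a referenced ConfigMap is missing and picks up the content once it appears. Names here are illustrative, not the generated names from this run.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalProjectedConfigMapPod mounts an optional ConfigMap through a
// projected volume so updates to the ConfigMap show up in the volume.
func optionalProjectedConfigMapPod() *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
								Optional:             &optional,
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
		},
	}
}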
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:26:06.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 27 11:26:06.943: INFO: Waiting up to 5m0s for pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-4xfkn" to be "success or failure"
Jan 27 11:26:06.964: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.985476ms
Jan 27 11:26:08.980: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036612691s
Jan 27 11:26:11.034: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090823814s
Jan 27 11:26:13.642: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69869989s
Jan 27 11:26:15.659: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716006641s
Jan 27 11:26:17.679: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.735818159s
STEP: Saw pod success
Jan 27 11:26:17.679: INFO: Pod "pod-cf6349c8-40f7-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:26:17.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cf6349c8-40f7-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 11:26:18.459: INFO: Waiting for pod pod-cf6349c8-40f7-11ea-b652-0242ac110005 to disappear
Jan 27 11:26:18.750: INFO: Pod pod-cf6349c8-40f7-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:26:18.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4xfkn" for this suite.
Jan 27 11:26:24.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:26:24.892: INFO: namespace: e2e-tests-emptydir-4xfkn, resource: bindings, ignored listing per whitelist
Jan 27 11:26:25.032: INFO: namespace e2e-tests-emptydir-4xfkn deletion completed in 6.245945284s

• [SLOW TEST:18.270 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
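Editor's note: the "(non-root,0666,default)" case above exercises an emptyDir on the default (disk-backed) medium with a file created at mode 0666 by a non-root container. A minimal sketch of that pod shape, with an assumed image and UID rather than the suite's exact values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirModePod writes a 0666-mode file into a default-medium emptyDir
// while running as a non-root user, then lists it for verification.
func emptyDirModePod() *corev1.Pod {
	nonRootUser := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "touch /mnt/volume/file && chmod 0666 /mnt/volume/file && ls -l /mnt/volume"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUser},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
			}},
		},
	}
}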
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:26:25.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 27 11:26:51.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:51.474: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:51.567376       8 log.go:172] (0xc000405810) (0xc000412500) Create stream
I0127 11:26:51.567604       8 log.go:172] (0xc000405810) (0xc000412500) Stream added, broadcasting: 1
I0127 11:26:51.573825       8 log.go:172] (0xc000405810) Reply frame received for 1
I0127 11:26:51.573876       8 log.go:172] (0xc000405810) (0xc00049a960) Create stream
I0127 11:26:51.573891       8 log.go:172] (0xc000405810) (0xc00049a960) Stream added, broadcasting: 3
I0127 11:26:51.575593       8 log.go:172] (0xc000405810) Reply frame received for 3
I0127 11:26:51.575625       8 log.go:172] (0xc000405810) (0xc00049abe0) Create stream
I0127 11:26:51.575637       8 log.go:172] (0xc000405810) (0xc00049abe0) Stream added, broadcasting: 5
I0127 11:26:51.577286       8 log.go:172] (0xc000405810) Reply frame received for 5
I0127 11:26:51.751921       8 log.go:172] (0xc000405810) Data frame received for 3
I0127 11:26:51.752055       8 log.go:172] (0xc00049a960) (3) Data frame handling
I0127 11:26:51.752098       8 log.go:172] (0xc00049a960) (3) Data frame sent
I0127 11:26:51.917992       8 log.go:172] (0xc000405810) Data frame received for 1
I0127 11:26:51.918221       8 log.go:172] (0xc000412500) (1) Data frame handling
I0127 11:26:51.918283       8 log.go:172] (0xc000412500) (1) Data frame sent
I0127 11:26:51.918316       8 log.go:172] (0xc000405810) (0xc000412500) Stream removed, broadcasting: 1
I0127 11:26:51.918716       8 log.go:172] (0xc000405810) (0xc00049a960) Stream removed, broadcasting: 3
I0127 11:26:51.919474       8 log.go:172] (0xc000405810) (0xc00049abe0) Stream removed, broadcasting: 5
I0127 11:26:51.919509       8 log.go:172] (0xc000405810) Go away received
I0127 11:26:51.919561       8 log.go:172] (0xc000405810) (0xc000412500) Stream removed, broadcasting: 1
I0127 11:26:51.919584       8 log.go:172] (0xc000405810) (0xc00049a960) Stream removed, broadcasting: 3
I0127 11:26:51.919600       8 log.go:172] (0xc000405810) (0xc00049abe0) Stream removed, broadcasting: 5
Jan 27 11:26:51.919: INFO: Exec stderr: ""
Jan 27 11:26:51.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:51.919: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:51.994647       8 log.go:172] (0xc000907e40) (0xc000d332c0) Create stream
I0127 11:26:51.994812       8 log.go:172] (0xc000907e40) (0xc000d332c0) Stream added, broadcasting: 1
I0127 11:26:51.998739       8 log.go:172] (0xc000907e40) Reply frame received for 1
I0127 11:26:51.998771       8 log.go:172] (0xc000907e40) (0xc000412b40) Create stream
I0127 11:26:51.998778       8 log.go:172] (0xc000907e40) (0xc000412b40) Stream added, broadcasting: 3
I0127 11:26:51.999863       8 log.go:172] (0xc000907e40) Reply frame received for 3
I0127 11:26:51.999883       8 log.go:172] (0xc000907e40) (0xc00049b360) Create stream
I0127 11:26:51.999891       8 log.go:172] (0xc000907e40) (0xc00049b360) Stream added, broadcasting: 5
I0127 11:26:52.000959       8 log.go:172] (0xc000907e40) Reply frame received for 5
I0127 11:26:52.123892       8 log.go:172] (0xc000907e40) Data frame received for 3
I0127 11:26:52.124010       8 log.go:172] (0xc000412b40) (3) Data frame handling
I0127 11:26:52.124040       8 log.go:172] (0xc000412b40) (3) Data frame sent
I0127 11:26:52.283871       8 log.go:172] (0xc000907e40) (0xc000412b40) Stream removed, broadcasting: 3
I0127 11:26:52.284016       8 log.go:172] (0xc000907e40) Data frame received for 1
I0127 11:26:52.284046       8 log.go:172] (0xc000d332c0) (1) Data frame handling
I0127 11:26:52.284090       8 log.go:172] (0xc000d332c0) (1) Data frame sent
I0127 11:26:52.284117       8 log.go:172] (0xc000907e40) (0xc000d332c0) Stream removed, broadcasting: 1
I0127 11:26:52.284169       8 log.go:172] (0xc000907e40) (0xc00049b360) Stream removed, broadcasting: 5
I0127 11:26:52.284186       8 log.go:172] (0xc000907e40) Go away received
I0127 11:26:52.284457       8 log.go:172] (0xc000907e40) (0xc000d332c0) Stream removed, broadcasting: 1
I0127 11:26:52.284549       8 log.go:172] (0xc000907e40) (0xc000412b40) Stream removed, broadcasting: 3
I0127 11:26:52.284558       8 log.go:172] (0xc000907e40) (0xc00049b360) Stream removed, broadcasting: 5
Jan 27 11:26:52.284: INFO: Exec stderr: ""
Jan 27 11:26:52.284: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:52.284: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:52.375931       8 log.go:172] (0xc0014b22c0) (0xc001bb0c80) Create stream
I0127 11:26:52.376000       8 log.go:172] (0xc0014b22c0) (0xc001bb0c80) Stream added, broadcasting: 1
I0127 11:26:52.388457       8 log.go:172] (0xc0014b22c0) Reply frame received for 1
I0127 11:26:52.388522       8 log.go:172] (0xc0014b22c0) (0xc001bb0d20) Create stream
I0127 11:26:52.388535       8 log.go:172] (0xc0014b22c0) (0xc001bb0d20) Stream added, broadcasting: 3
I0127 11:26:52.389944       8 log.go:172] (0xc0014b22c0) Reply frame received for 3
I0127 11:26:52.389967       8 log.go:172] (0xc0014b22c0) (0xc000d33360) Create stream
I0127 11:26:52.389977       8 log.go:172] (0xc0014b22c0) (0xc000d33360) Stream added, broadcasting: 5
I0127 11:26:52.390935       8 log.go:172] (0xc0014b22c0) Reply frame received for 5
I0127 11:26:52.791070       8 log.go:172] (0xc0014b22c0) Data frame received for 3
I0127 11:26:52.791149       8 log.go:172] (0xc001bb0d20) (3) Data frame handling
I0127 11:26:52.791174       8 log.go:172] (0xc001bb0d20) (3) Data frame sent
I0127 11:26:53.004734       8 log.go:172] (0xc0014b22c0) (0xc001bb0d20) Stream removed, broadcasting: 3
I0127 11:26:53.004840       8 log.go:172] (0xc0014b22c0) Data frame received for 1
I0127 11:26:53.004857       8 log.go:172] (0xc001bb0c80) (1) Data frame handling
I0127 11:26:53.004868       8 log.go:172] (0xc001bb0c80) (1) Data frame sent
I0127 11:26:53.004877       8 log.go:172] (0xc0014b22c0) (0xc001bb0c80) Stream removed, broadcasting: 1
I0127 11:26:53.004898       8 log.go:172] (0xc0014b22c0) (0xc000d33360) Stream removed, broadcasting: 5
I0127 11:26:53.004927       8 log.go:172] (0xc0014b22c0) Go away received
I0127 11:26:53.005066       8 log.go:172] (0xc0014b22c0) (0xc001bb0c80) Stream removed, broadcasting: 1
I0127 11:26:53.005091       8 log.go:172] (0xc0014b22c0) (0xc001bb0d20) Stream removed, broadcasting: 3
I0127 11:26:53.005118       8 log.go:172] (0xc0014b22c0) (0xc000d33360) Stream removed, broadcasting: 5
Jan 27 11:26:53.005: INFO: Exec stderr: ""
Jan 27 11:26:53.005: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:53.005: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:53.069267       8 log.go:172] (0xc000405d90) (0xc000413180) Create stream
I0127 11:26:53.069322       8 log.go:172] (0xc000405d90) (0xc000413180) Stream added, broadcasting: 1
I0127 11:26:53.074770       8 log.go:172] (0xc000405d90) Reply frame received for 1
I0127 11:26:53.074817       8 log.go:172] (0xc000405d90) (0xc000d33540) Create stream
I0127 11:26:53.074829       8 log.go:172] (0xc000405d90) (0xc000d33540) Stream added, broadcasting: 3
I0127 11:26:53.075759       8 log.go:172] (0xc000405d90) Reply frame received for 3
I0127 11:26:53.075779       8 log.go:172] (0xc000405d90) (0xc000d335e0) Create stream
I0127 11:26:53.075788       8 log.go:172] (0xc000405d90) (0xc000d335e0) Stream added, broadcasting: 5
I0127 11:26:53.078430       8 log.go:172] (0xc000405d90) Reply frame received for 5
I0127 11:26:53.195486       8 log.go:172] (0xc000405d90) Data frame received for 3
I0127 11:26:53.195522       8 log.go:172] (0xc000d33540) (3) Data frame handling
I0127 11:26:53.195538       8 log.go:172] (0xc000d33540) (3) Data frame sent
I0127 11:26:53.303901       8 log.go:172] (0xc000405d90) (0xc000d33540) Stream removed, broadcasting: 3
I0127 11:26:53.304006       8 log.go:172] (0xc000405d90) Data frame received for 1
I0127 11:26:53.304040       8 log.go:172] (0xc000405d90) (0xc000d335e0) Stream removed, broadcasting: 5
I0127 11:26:53.304084       8 log.go:172] (0xc000413180) (1) Data frame handling
I0127 11:26:53.304100       8 log.go:172] (0xc000413180) (1) Data frame sent
I0127 11:26:53.304106       8 log.go:172] (0xc000405d90) (0xc000413180) Stream removed, broadcasting: 1
I0127 11:26:53.304116       8 log.go:172] (0xc000405d90) Go away received
I0127 11:26:53.304335       8 log.go:172] (0xc000405d90) (0xc000413180) Stream removed, broadcasting: 1
I0127 11:26:53.304352       8 log.go:172] (0xc000405d90) (0xc000d33540) Stream removed, broadcasting: 3
I0127 11:26:53.304366       8 log.go:172] (0xc000405d90) (0xc000d335e0) Stream removed, broadcasting: 5
Jan 27 11:26:53.304: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 27 11:26:53.304: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:53.304: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:53.399912       8 log.go:172] (0xc0016f8420) (0xc000d33900) Create stream
I0127 11:26:53.399948       8 log.go:172] (0xc0016f8420) (0xc000d33900) Stream added, broadcasting: 1
I0127 11:26:53.404466       8 log.go:172] (0xc0016f8420) Reply frame received for 1
I0127 11:26:53.404493       8 log.go:172] (0xc0016f8420) (0xc001a93ae0) Create stream
I0127 11:26:53.404503       8 log.go:172] (0xc0016f8420) (0xc001a93ae0) Stream added, broadcasting: 3
I0127 11:26:53.405914       8 log.go:172] (0xc0016f8420) Reply frame received for 3
I0127 11:26:53.405939       8 log.go:172] (0xc0016f8420) (0xc001c50960) Create stream
I0127 11:26:53.405955       8 log.go:172] (0xc0016f8420) (0xc001c50960) Stream added, broadcasting: 5
I0127 11:26:53.407690       8 log.go:172] (0xc0016f8420) Reply frame received for 5
I0127 11:26:53.504805       8 log.go:172] (0xc0016f8420) Data frame received for 3
I0127 11:26:53.504851       8 log.go:172] (0xc001a93ae0) (3) Data frame handling
I0127 11:26:53.504873       8 log.go:172] (0xc001a93ae0) (3) Data frame sent
I0127 11:26:53.626168       8 log.go:172] (0xc0016f8420) Data frame received for 1
I0127 11:26:53.626211       8 log.go:172] (0xc000d33900) (1) Data frame handling
I0127 11:26:53.626232       8 log.go:172] (0xc000d33900) (1) Data frame sent
I0127 11:26:53.627257       8 log.go:172] (0xc0016f8420) (0xc000d33900) Stream removed, broadcasting: 1
I0127 11:26:53.628245       8 log.go:172] (0xc0016f8420) (0xc001a93ae0) Stream removed, broadcasting: 3
I0127 11:26:53.628655       8 log.go:172] (0xc0016f8420) (0xc001c50960) Stream removed, broadcasting: 5
I0127 11:26:53.628717       8 log.go:172] (0xc0016f8420) (0xc000d33900) Stream removed, broadcasting: 1
I0127 11:26:53.628732       8 log.go:172] (0xc0016f8420) (0xc001a93ae0) Stream removed, broadcasting: 3
I0127 11:26:53.628747       8 log.go:172] (0xc0016f8420) (0xc001c50960) Stream removed, broadcasting: 5
I0127 11:26:53.628964       8 log.go:172] (0xc0016f8420) Go away received
Jan 27 11:26:53.629: INFO: Exec stderr: ""
Jan 27 11:26:53.629: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:53.629: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:53.692333       8 log.go:172] (0xc0016f88f0) (0xc000d33b80) Create stream
I0127 11:26:53.692367       8 log.go:172] (0xc0016f88f0) (0xc000d33b80) Stream added, broadcasting: 1
I0127 11:26:53.696712       8 log.go:172] (0xc0016f88f0) Reply frame received for 1
I0127 11:26:53.696740       8 log.go:172] (0xc0016f88f0) (0xc001c50a00) Create stream
I0127 11:26:53.696751       8 log.go:172] (0xc0016f88f0) (0xc001c50a00) Stream added, broadcasting: 3
I0127 11:26:53.698084       8 log.go:172] (0xc0016f88f0) Reply frame received for 3
I0127 11:26:53.698118       8 log.go:172] (0xc0016f88f0) (0xc001a93b80) Create stream
I0127 11:26:53.698135       8 log.go:172] (0xc0016f88f0) (0xc001a93b80) Stream added, broadcasting: 5
I0127 11:26:53.699304       8 log.go:172] (0xc0016f88f0) Reply frame received for 5
I0127 11:26:53.798785       8 log.go:172] (0xc0016f88f0) Data frame received for 3
I0127 11:26:53.798837       8 log.go:172] (0xc001c50a00) (3) Data frame handling
I0127 11:26:53.798855       8 log.go:172] (0xc001c50a00) (3) Data frame sent
I0127 11:26:53.969179       8 log.go:172] (0xc0016f88f0) (0xc001c50a00) Stream removed, broadcasting: 3
I0127 11:26:53.969256       8 log.go:172] (0xc0016f88f0) Data frame received for 1
I0127 11:26:53.969266       8 log.go:172] (0xc000d33b80) (1) Data frame handling
I0127 11:26:53.969272       8 log.go:172] (0xc000d33b80) (1) Data frame sent
I0127 11:26:53.969350       8 log.go:172] (0xc0016f88f0) (0xc000d33b80) Stream removed, broadcasting: 1
I0127 11:26:53.969382       8 log.go:172] (0xc0016f88f0) (0xc001a93b80) Stream removed, broadcasting: 5
I0127 11:26:53.969401       8 log.go:172] (0xc0016f88f0) Go away received
I0127 11:26:53.969523       8 log.go:172] (0xc0016f88f0) (0xc000d33b80) Stream removed, broadcasting: 1
I0127 11:26:53.969536       8 log.go:172] (0xc0016f88f0) (0xc001c50a00) Stream removed, broadcasting: 3
I0127 11:26:53.969544       8 log.go:172] (0xc0016f88f0) (0xc001a93b80) Stream removed, broadcasting: 5
Jan 27 11:26:53.969: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 27 11:26:53.969: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:53.969: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:54.046051       8 log.go:172] (0xc001caa2c0) (0xc000e763c0) Create stream
I0127 11:26:54.046081       8 log.go:172] (0xc001caa2c0) (0xc000e763c0) Stream added, broadcasting: 1
I0127 11:26:54.049708       8 log.go:172] (0xc001caa2c0) Reply frame received for 1
I0127 11:26:54.049745       8 log.go:172] (0xc001caa2c0) (0xc000e76500) Create stream
I0127 11:26:54.049758       8 log.go:172] (0xc001caa2c0) (0xc000e76500) Stream added, broadcasting: 3
I0127 11:26:54.050967       8 log.go:172] (0xc001caa2c0) Reply frame received for 3
I0127 11:26:54.051002       8 log.go:172] (0xc001caa2c0) (0xc000e765a0) Create stream
I0127 11:26:54.051016       8 log.go:172] (0xc001caa2c0) (0xc000e765a0) Stream added, broadcasting: 5
I0127 11:26:54.052005       8 log.go:172] (0xc001caa2c0) Reply frame received for 5
I0127 11:26:54.152448       8 log.go:172] (0xc001caa2c0) Data frame received for 3
I0127 11:26:54.152503       8 log.go:172] (0xc000e76500) (3) Data frame handling
I0127 11:26:54.152530       8 log.go:172] (0xc000e76500) (3) Data frame sent
I0127 11:26:54.273037       8 log.go:172] (0xc001caa2c0) (0xc000e76500) Stream removed, broadcasting: 3
I0127 11:26:54.273131       8 log.go:172] (0xc001caa2c0) Data frame received for 1
I0127 11:26:54.273179       8 log.go:172] (0xc000e763c0) (1) Data frame handling
I0127 11:26:54.273204       8 log.go:172] (0xc000e763c0) (1) Data frame sent
I0127 11:26:54.273257       8 log.go:172] (0xc001caa2c0) (0xc000e765a0) Stream removed, broadcasting: 5
I0127 11:26:54.273356       8 log.go:172] (0xc001caa2c0) (0xc000e763c0) Stream removed, broadcasting: 1
I0127 11:26:54.273403       8 log.go:172] (0xc001caa2c0) Go away received
I0127 11:26:54.273648       8 log.go:172] (0xc001caa2c0) (0xc000e763c0) Stream removed, broadcasting: 1
I0127 11:26:54.273684       8 log.go:172] (0xc001caa2c0) (0xc000e76500) Stream removed, broadcasting: 3
I0127 11:26:54.273703       8 log.go:172] (0xc001caa2c0) (0xc000e765a0) Stream removed, broadcasting: 5
Jan 27 11:26:54.273: INFO: Exec stderr: ""
Jan 27 11:26:54.273: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:54.273: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:54.348434       8 log.go:172] (0xc001caa790) (0xc000e768c0) Create stream
I0127 11:26:54.348517       8 log.go:172] (0xc001caa790) (0xc000e768c0) Stream added, broadcasting: 1
I0127 11:26:54.352051       8 log.go:172] (0xc001caa790) Reply frame received for 1
I0127 11:26:54.352075       8 log.go:172] (0xc001caa790) (0xc001a93c20) Create stream
I0127 11:26:54.352084       8 log.go:172] (0xc001caa790) (0xc001a93c20) Stream added, broadcasting: 3
I0127 11:26:54.353156       8 log.go:172] (0xc001caa790) Reply frame received for 3
I0127 11:26:54.353222       8 log.go:172] (0xc001caa790) (0xc001a93cc0) Create stream
I0127 11:26:54.353241       8 log.go:172] (0xc001caa790) (0xc001a93cc0) Stream added, broadcasting: 5
I0127 11:26:54.355188       8 log.go:172] (0xc001caa790) Reply frame received for 5
I0127 11:26:54.454595       8 log.go:172] (0xc001caa790) Data frame received for 3
I0127 11:26:54.454778       8 log.go:172] (0xc001a93c20) (3) Data frame handling
I0127 11:26:54.454831       8 log.go:172] (0xc001a93c20) (3) Data frame sent
I0127 11:26:54.624489       8 log.go:172] (0xc001caa790) Data frame received for 1
I0127 11:26:54.624649       8 log.go:172] (0xc000e768c0) (1) Data frame handling
I0127 11:26:54.624676       8 log.go:172] (0xc000e768c0) (1) Data frame sent
I0127 11:26:54.624755       8 log.go:172] (0xc001caa790) (0xc000e768c0) Stream removed, broadcasting: 1
I0127 11:26:54.624900       8 log.go:172] (0xc001caa790) (0xc001a93c20) Stream removed, broadcasting: 3
I0127 11:26:54.625333       8 log.go:172] (0xc001caa790) (0xc001a93cc0) Stream removed, broadcasting: 5
I0127 11:26:54.625357       8 log.go:172] (0xc001caa790) (0xc000e768c0) Stream removed, broadcasting: 1
I0127 11:26:54.625364       8 log.go:172] (0xc001caa790) (0xc001a93c20) Stream removed, broadcasting: 3
I0127 11:26:54.625370       8 log.go:172] (0xc001caa790) (0xc001a93cc0) Stream removed, broadcasting: 5
Jan 27 11:26:54.625: INFO: Exec stderr: ""
I0127 11:26:54.625846       8 log.go:172] (0xc001caa790) Go away received
Jan 27 11:26:54.625: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:54.625: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:54.695111       8 log.go:172] (0xc0021b42c0) (0xc001a93f40) Create stream
I0127 11:26:54.695176       8 log.go:172] (0xc0021b42c0) (0xc001a93f40) Stream added, broadcasting: 1
I0127 11:26:54.705748       8 log.go:172] (0xc0021b42c0) Reply frame received for 1
I0127 11:26:54.705830       8 log.go:172] (0xc0021b42c0) (0xc0004132c0) Create stream
I0127 11:26:54.705840       8 log.go:172] (0xc0021b42c0) (0xc0004132c0) Stream added, broadcasting: 3
I0127 11:26:54.706582       8 log.go:172] (0xc0021b42c0) Reply frame received for 3
I0127 11:26:54.706604       8 log.go:172] (0xc0021b42c0) (0xc000c94000) Create stream
I0127 11:26:54.706626       8 log.go:172] (0xc0021b42c0) (0xc000c94000) Stream added, broadcasting: 5
I0127 11:26:54.707985       8 log.go:172] (0xc0021b42c0) Reply frame received for 5
I0127 11:26:54.796578       8 log.go:172] (0xc0021b42c0) Data frame received for 3
I0127 11:26:54.796629       8 log.go:172] (0xc0004132c0) (3) Data frame handling
I0127 11:26:54.796656       8 log.go:172] (0xc0004132c0) (3) Data frame sent
I0127 11:26:54.922567       8 log.go:172] (0xc0021b42c0) (0xc0004132c0) Stream removed, broadcasting: 3
I0127 11:26:54.922654       8 log.go:172] (0xc0021b42c0) (0xc000c94000) Stream removed, broadcasting: 5
I0127 11:26:54.922672       8 log.go:172] (0xc0021b42c0) Data frame received for 1
I0127 11:26:54.922678       8 log.go:172] (0xc001a93f40) (1) Data frame handling
I0127 11:26:54.922684       8 log.go:172] (0xc001a93f40) (1) Data frame sent
I0127 11:26:54.922691       8 log.go:172] (0xc0021b42c0) (0xc001a93f40) Stream removed, broadcasting: 1
I0127 11:26:54.922700       8 log.go:172] (0xc0021b42c0) Go away received
I0127 11:26:54.922866       8 log.go:172] (0xc0021b42c0) (0xc001a93f40) Stream removed, broadcasting: 1
I0127 11:26:54.922884       8 log.go:172] (0xc0021b42c0) (0xc0004132c0) Stream removed, broadcasting: 3
I0127 11:26:54.922891       8 log.go:172] (0xc0021b42c0) (0xc000c94000) Stream removed, broadcasting: 5
Jan 27 11:26:54.922: INFO: Exec stderr: ""
Jan 27 11:26:54.922: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jvbcf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:26:54.922: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:26:54.999095       8 log.go:172] (0xc0016f8dc0) (0xc000d33e00) Create stream
I0127 11:26:54.999151       8 log.go:172] (0xc0016f8dc0) (0xc000d33e00) Stream added, broadcasting: 1
I0127 11:26:55.003920       8 log.go:172] (0xc0016f8dc0) Reply frame received for 1
I0127 11:26:55.003971       8 log.go:172] (0xc0016f8dc0) (0xc000413400) Create stream
I0127 11:26:55.003991       8 log.go:172] (0xc0016f8dc0) (0xc000413400) Stream added, broadcasting: 3
I0127 11:26:55.004922       8 log.go:172] (0xc0016f8dc0) Reply frame received for 3
I0127 11:26:55.004942       8 log.go:172] (0xc0016f8dc0) (0xc000c94140) Create stream
I0127 11:26:55.004949       8 log.go:172] (0xc0016f8dc0) (0xc000c94140) Stream added, broadcasting: 5
I0127 11:26:55.005754       8 log.go:172] (0xc0016f8dc0) Reply frame received for 5
I0127 11:26:55.207666       8 log.go:172] (0xc0016f8dc0) Data frame received for 3
I0127 11:26:55.207740       8 log.go:172] (0xc000413400) (3) Data frame handling
I0127 11:26:55.207767       8 log.go:172] (0xc000413400) (3) Data frame sent
I0127 11:26:55.325881       8 log.go:172] (0xc0016f8dc0) (0xc000413400) Stream removed, broadcasting: 3
I0127 11:26:55.326025       8 log.go:172] (0xc0016f8dc0) Data frame received for 1
I0127 11:26:55.326036       8 log.go:172] (0xc000d33e00) (1) Data frame handling
I0127 11:26:55.326049       8 log.go:172] (0xc000d33e00) (1) Data frame sent
I0127 11:26:55.326056       8 log.go:172] (0xc0016f8dc0) (0xc000d33e00) Stream removed, broadcasting: 1
I0127 11:26:55.326220       8 log.go:172] (0xc0016f8dc0) (0xc000c94140) Stream removed, broadcasting: 5
I0127 11:26:55.326255       8 log.go:172] (0xc0016f8dc0) (0xc000d33e00) Stream removed, broadcasting: 1
I0127 11:26:55.326270       8 log.go:172] (0xc0016f8dc0) (0xc000413400) Stream removed, broadcasting: 3
I0127 11:26:55.326281       8 log.go:172] (0xc0016f8dc0) (0xc000c94140) Stream removed, broadcasting: 5
I0127 11:26:55.326714       8 log.go:172] (0xc0016f8dc0) Go away received
Jan 27 11:26:55.326: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:26:55.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-jvbcf" for this suite.
Jan 27 11:27:41.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:27:41.406: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-jvbcf, resource: bindings, ignored listing per whitelist
Jan 27 11:27:41.494: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-jvbcf deletion completed in 46.148845498s

• [SLOW TEST:76.462 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
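Editor's note: the exec-stream blocks above repeatedly cat /etc/hosts and /etc/hosts-original from the hostNetwork=false and hostNetwork=true pods. What the test actually decides from that output is whether the file is the kubelet-managed one, which the kubelet marks with a recognisable header. A minimal sketch of that check, assuming the header text used by kubelets of this era (treat the exact string as an assumption for other versions):

package sketch

import "strings"

// isKubeletManagedEtcHosts reports whether an /etc/hosts file was written by
// the kubelet rather than supplied by the image or a container-level mount.
func isKubeletManagedEtcHosts(etcHostsContent string) bool {
	return strings.Contains(etcHostsContent, "# Kubernetes-managed hosts file")
}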
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:27:41.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:27:41.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ssppf" for this suite.
Jan 27 11:27:47.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:27:47.966: INFO: namespace: e2e-tests-kubelet-test-ssppf, resource: bindings, ignored listing per whitelist
Jan 27 11:27:47.994: INFO: namespace e2e-tests-kubelet-test-ssppf deletion completed in 6.272650758s

• [SLOW TEST:6.500 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:27:47.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 11:27:48.152: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 27 11:27:48.260: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 27 11:27:53.276: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 27 11:27:59.296: INFO: Creating deployment "test-rolling-update-deployment"
Jan 27 11:27:59.312: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 27 11:27:59.331: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 27 11:28:01.367: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 27 11:28:01.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:28:03.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:28:05.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:28:07.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715721279, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:28:09.893: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 27 11:28:10.432: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-8bkll,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8bkll/deployments/test-rolling-update-deployment,UID:1265fd33-40f8-11ea-a994-fa163e34d433,ResourceVersion:19626875,Generation:1,CreationTimestamp:2020-01-27 11:27:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-27 11:27:59 +0000 UTC 2020-01-27 11:27:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-27 11:28:09 +0000 UTC 2020-01-27 11:27:59 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 27 11:28:10.467: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-8bkll,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8bkll/replicasets/test-rolling-update-deployment-75db98fb4c,UID:12717fdc-40f8-11ea-a994-fa163e34d433,ResourceVersion:19626865,Generation:1,CreationTimestamp:2020-01-27 11:27:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1265fd33-40f8-11ea-a994-fa163e34d433 0xc0019749d7 0xc0019749d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 27 11:28:10.467: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 27 11:28:10.467: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-8bkll,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8bkll/replicasets/test-rolling-update-controller,UID:0bc186b3-40f8-11ea-a994-fa163e34d433,ResourceVersion:19626873,Generation:2,CreationTimestamp:2020-01-27 11:27:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 1265fd33-40f8-11ea-a994-fa163e34d433 0xc00197483f 0xc001974880}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 11:28:10.488: INFO: Pod "test-rolling-update-deployment-75db98fb4c-wv2zk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-wv2zk,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-8bkll,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8bkll/pods/test-rolling-update-deployment-75db98fb4c-wv2zk,UID:127fcc75-40f8-11ea-a994-fa163e34d433,ResourceVersion:19626864,Generation:0,CreationTimestamp:2020-01-27 11:27:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 12717fdc-40f8-11ea-a994-fa163e34d433 0xc001975507 0xc001975508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cb6lm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cb6lm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-cb6lm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019755a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019755d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:27:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:28:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:28:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:27:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-27 11:27:59 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-27 11:28:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3b367dd55458798ace3fc91c36982ac24a3dfb4d17b00db42130872785c87561}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:28:10.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8bkll" for this suite.
Jan 27 11:28:18.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:28:18.739: INFO: namespace: e2e-tests-deployment-8bkll, resource: bindings, ignored listing per whitelist
Jan 27 11:28:18.852: INFO: namespace e2e-tests-deployment-8bkll deletion completed in 8.276226174s

• [SLOW TEST:30.858 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
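
For context on what a rolling-update case like the one above drives: a minimal Go sketch of a Deployment whose strategy swaps old pods for new ones instead of deleting everything up front. The label and the redis image are taken from the log output; the replica count and the surge/unavailable settings are illustrative assumptions, not the suite's actual values.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    replicas := int32(1)
    maxUnavailable := intstr.FromInt(0)
    maxSurge := intstr.FromInt(1)
    labels := map[string]string{"name": "sample-pod"}

    d := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // RollingUpdate replaces the old ReplicaSet's pods incrementally
            // rather than tearing everything down first.
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxUnavailable: &maxUnavailable,
                    MaxSurge:       &maxSurge,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }
    fmt.Printf("%+v\n", d.Spec.Strategy)
}

With maxUnavailable 0 and maxSurge 1, the controller brings the new redis pod up before scaling the old nginx ReplicaSet down to zero, which matches the two ReplicaSets dumped above.
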
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:28:18.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 27 11:28:20.278: INFO: Waiting up to 5m0s for pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005" in namespace "e2e-tests-containers-vxbzz" to be "success or failure"
Jan 27 11:28:20.293: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.569373ms
Jan 27 11:28:22.703: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42510756s
Jan 27 11:28:24.729: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450313203s
Jan 27 11:28:26.878: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.60011195s
Jan 27 11:28:28.908: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629518707s
Jan 27 11:28:30.917: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638715715s
Jan 27 11:28:33.801: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.522280454s
STEP: Saw pod success
Jan 27 11:28:33.801: INFO: Pod "client-containers-1edbae0d-40f8-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:28:33.812: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-1edbae0d-40f8-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 11:28:34.115: INFO: Waiting for pod client-containers-1edbae0d-40f8-11ea-b652-0242ac110005 to disappear
Jan 27 11:28:34.130: INFO: Pod client-containers-1edbae0d-40f8-11ea-b652-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:28:34.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vxbzz" for this suite.
Jan 27 11:28:40.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:28:41.074: INFO: namespace: e2e-tests-containers-vxbzz, resource: bindings, ignored listing per whitelist
Jan 27 11:28:41.115: INFO: namespace e2e-tests-containers-vxbzz deletion completed in 6.971536274s

• [SLOW TEST:22.263 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
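
The "override all" case above boils down to setting both command and args on the container, which replace the image's ENTRYPOINT and CMD respectively. A minimal Go sketch using the k8s.io/api types; the busybox image and the echoed payload are assumptions, not the suite's actual manifest.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative; the suite uses its own e2e image
                // Command overrides the image's ENTRYPOINT, Args overrides its CMD.
                Command: []string{"/bin/sh", "-c"},
                Args:    []string{"echo overridden command and args"},
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}
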
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:28:41.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 27 11:28:41.489: INFO: Waiting up to 5m0s for pod "pod-2b89e654-40f8-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-m8dxp" to be "success or failure"
Jan 27 11:28:41.581: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.255465ms
Jan 27 11:28:43.860: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370449224s
Jan 27 11:28:45.871: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381455396s
Jan 27 11:28:47.959: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469087813s
Jan 27 11:28:49.992: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50218911s
Jan 27 11:28:52.020: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.530916517s
STEP: Saw pod success
Jan 27 11:28:52.021: INFO: Pod "pod-2b89e654-40f8-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:28:52.031: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2b89e654-40f8-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 11:28:52.233: INFO: Waiting for pod pod-2b89e654-40f8-11ea-b652-0242ac110005 to disappear
Jan 27 11:28:52.249: INFO: Pod pod-2b89e654-40f8-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:28:52.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-m8dxp" for this suite.
Jan 27 11:28:58.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:28:58.641: INFO: namespace: e2e-tests-emptydir-m8dxp, resource: bindings, ignored listing per whitelist
Jan 27 11:28:58.726: INFO: namespace e2e-tests-emptydir-m8dxp deletion completed in 6.468215883s

• [SLOW TEST:17.611 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
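
An emptyDir case like the one above mounts a scratch volume backed by the node's default medium and then checks ownership and permissions from inside the container. A minimal sketch with the k8s.io/api types; the image, command, and mount path are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-medium"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Default medium = node disk; corev1.StorageMediumMemory would use tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative image
                // The conformance container inspects the mount's owner and mode
                // (root, 0777 in this variant); this sketch just lists the directory.
                Command:      []string{"/bin/sh", "-c", "ls -ld /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod.Spec.Volumes[0].EmptyDir)
}
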
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:28:58.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 27 11:28:58.921: INFO: Waiting up to 5m0s for pod "pod-35ee975b-40f8-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-n2482" to be "success or failure"
Jan 27 11:28:58.943: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.691807ms
Jan 27 11:29:00.951: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03083613s
Jan 27 11:29:02.997: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076561811s
Jan 27 11:29:05.016: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095387451s
Jan 27 11:29:07.037: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116214476s
Jan 27 11:29:09.176: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255893854s
STEP: Saw pod success
Jan 27 11:29:09.177: INFO: Pod "pod-35ee975b-40f8-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:29:09.204: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-35ee975b-40f8-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 11:29:09.334: INFO: Waiting for pod pod-35ee975b-40f8-11ea-b652-0242ac110005 to disappear
Jan 27 11:29:09.355: INFO: Pod pod-35ee975b-40f8-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:29:09.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n2482" for this suite.
Jan 27 11:29:15.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:29:15.653: INFO: namespace: e2e-tests-emptydir-n2482, resource: bindings, ignored listing per whitelist
Jan 27 11:29:15.657: INFO: namespace e2e-tests-emptydir-n2482 deletion completed in 6.283211342s

• [SLOW TEST:16.930 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
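
The non-root variant above differs mainly in the pod-level security context: the test container runs as an unprivileged UID, so files written into the volume are owned by that user. A small sketch of just that piece; the UID is an assumption, not the suite's value.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// nonRootContext builds the kind of security context a non-root emptyDir
// case implies: a non-zero UID plus an explicit non-root assertion.
func nonRootContext() *corev1.PodSecurityContext {
    uid := int64(1001) // illustrative UID
    nonRoot := true
    return &corev1.PodSecurityContext{
        RunAsUser:    &uid,
        RunAsNonRoot: &nonRoot,
    }
}

func main() {
    fmt.Printf("%+v\n", nonRootContext())
}
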
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:29:15.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0127 11:29:30.425009       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 11:29:30.425: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:29:30.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4x94m" for this suite.
Jan 27 11:29:46.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:29:46.862: INFO: namespace: e2e-tests-gc-4x94m, resource: bindings, ignored listing per whitelist
Jan 27 11:29:46.883: INFO: namespace e2e-tests-gc-4x94m deletion completed in 16.446774951s

• [SLOW TEST:31.226 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
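
The interesting part of this garbage-collector case is the ownership wiring: the surviving pods carry two owner references, and the collector only deletes a dependent once it has no remaining owners. A sketch of those structures using the apimachinery types; the UIDs are placeholders.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

func main() {
    block := true
    // A dependent pod owned by both the RC being deleted and the RC that stays.
    owners := []metav1.OwnerReference{
        {
            APIVersion:         "v1",
            Kind:               "ReplicationController",
            Name:               "simpletest-rc-to-be-deleted",
            UID:                types.UID("uid-of-rc-to-be-deleted"), // placeholder
            BlockOwnerDeletion: &block,
        },
        {
            APIVersion: "v1",
            Kind:       "ReplicationController",
            Name:       "simpletest-rc-to-stay",
            UID:        types.UID("uid-of-rc-to-stay"), // placeholder
        },
    }

    // Foreground propagation: the owner is removed only after its blocking
    // dependents are gone (or, as here, still held by another valid owner).
    policy := metav1.DeletePropagationForeground
    opts := metav1.DeleteOptions{PropagationPolicy: &policy}

    fmt.Println(len(owners), *opts.PropagationPolicy)
}

Foreground deletion of simpletest-rc-to-be-deleted therefore waits on its solely-owned pods but leaves the doubly-owned ones behind for simpletest-rc-to-stay, which is the behavior the test asserts.
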
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:29:46.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-52a3409d-40f8-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 11:29:47.450: INFO: Waiting up to 5m0s for pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-2m2dh" to be "success or failure"
Jan 27 11:29:47.481: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.313909ms
Jan 27 11:29:50.940: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.49069412s
Jan 27 11:29:52.964: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.514250536s
Jan 27 11:29:54.988: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.53832229s
Jan 27 11:29:56.999: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.549259235s
Jan 27 11:29:59.022: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.572379907s
Jan 27 11:30:01.045: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.595766722s
Jan 27 11:30:03.080: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.630799207s
Jan 27 11:30:05.102: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.652913474s
Jan 27 11:30:07.128: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.678550826s
STEP: Saw pod success
Jan 27 11:30:07.128: INFO: Pod "pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:30:07.141: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 27 11:30:07.397: INFO: Waiting for pod pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005 to disappear
Jan 27 11:30:07.441: INFO: Pod pod-secrets-52c7fdeb-40f8-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:30:07.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2m2dh" for this suite.
Jan 27 11:30:15.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:30:15.564: INFO: namespace: e2e-tests-secrets-2m2dh, resource: bindings, ignored listing per whitelist
Jan 27 11:30:15.682: INFO: namespace e2e-tests-secrets-2m2dh deletion completed in 8.224326731s
STEP: Destroying namespace "e2e-tests-secret-namespace-splf5" for this suite.
Jan 27 11:30:21.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:30:21.797: INFO: namespace: e2e-tests-secret-namespace-splf5, resource: bindings, ignored listing per whitelist
Jan 27 11:30:21.948: INFO: namespace e2e-tests-secret-namespace-splf5 deletion completed in 6.26624857s

• [SLOW TEST:35.065 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
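
Secret volumes are resolved against the pod's own namespace, which is what the case above verifies by creating a decoy secret with the same name in a second namespace. A minimal sketch of the mounting pod; the secret name and namespace come from the log, while the image and mount path are assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "e2e-tests-secrets-2m2dh"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    // SecretName is looked up in the pod's namespace only.
                    Secret: &corev1.SecretVolumeSource{SecretName: "secret-test-52a3409d-40f8-11ea-b652-0242ac110005"},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox", // illustrative image
                Command:      []string{"/bin/sh", "-c", "cat /etc/secret-volume/*"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
            }},
        },
    }
    fmt.Println(pod.Spec.Volumes[0].Secret.SecretName)
}
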
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:30:21.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-678beac4-40f8-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 11:30:22.181: INFO: Waiting up to 5m0s for pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-d7zfs" to be "success or failure"
Jan 27 11:30:22.188: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.152921ms
Jan 27 11:30:24.534: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353427463s
Jan 27 11:30:26.563: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382443711s
Jan 27 11:30:28.682: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501214327s
Jan 27 11:30:30.725: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543711026s
Jan 27 11:30:32.756: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.575287695s
STEP: Saw pod success
Jan 27 11:30:32.756: INFO: Pod "pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:30:32.767: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 27 11:30:32.961: INFO: Waiting for pod pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005 to disappear
Jan 27 11:30:32.974: INFO: Pod pod-secrets-678d2f5c-40f8-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:30:32.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d7zfs" for this suite.
Jan 27 11:30:39.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:30:39.085: INFO: namespace: e2e-tests-secrets-d7zfs, resource: bindings, ignored listing per whitelist
Jan 27 11:30:39.196: INFO: namespace e2e-tests-secrets-d7zfs deletion completed in 6.20201953s

• [SLOW TEST:17.247 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
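
"Mappings and Item Mode" means individual secret keys are projected to chosen file paths with explicit modes rather than dumped wholesale. A sketch of just the volume source; the key and path names are illustrative, only the secret name comes from the log.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400)
    src := corev1.SecretVolumeSource{
        SecretName: "secret-test-map-678beac4-40f8-11ea-b652-0242ac110005",
        Items: []corev1.KeyToPath{{
            Key:  "data-1",          // illustrative key
            Path: "new-path-data-1", // illustrative target path under the mount
            Mode: &mode,             // 0400 -> readable by the file owner only
        }},
    }
    fmt.Printf("%+v\n", src)
}
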
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:30:39.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-d6mws
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 11:30:39.458: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 11:31:17.632: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-d6mws PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:31:17.632: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:31:17.717930       8 log.go:172] (0xc000907c30) (0xc00150e780) Create stream
I0127 11:31:17.718011       8 log.go:172] (0xc000907c30) (0xc00150e780) Stream added, broadcasting: 1
I0127 11:31:17.727254       8 log.go:172] (0xc000907c30) Reply frame received for 1
I0127 11:31:17.727294       8 log.go:172] (0xc000907c30) (0xc00150e820) Create stream
I0127 11:31:17.727306       8 log.go:172] (0xc000907c30) (0xc00150e820) Stream added, broadcasting: 3
I0127 11:31:17.728817       8 log.go:172] (0xc000907c30) Reply frame received for 3
I0127 11:31:17.728855       8 log.go:172] (0xc000907c30) (0xc0021780a0) Create stream
I0127 11:31:17.728866       8 log.go:172] (0xc000907c30) (0xc0021780a0) Stream added, broadcasting: 5
I0127 11:31:17.730459       8 log.go:172] (0xc000907c30) Reply frame received for 5
I0127 11:31:18.091424       8 log.go:172] (0xc000907c30) Data frame received for 3
I0127 11:31:18.091466       8 log.go:172] (0xc00150e820) (3) Data frame handling
I0127 11:31:18.091479       8 log.go:172] (0xc00150e820) (3) Data frame sent
I0127 11:31:18.216316       8 log.go:172] (0xc000907c30) (0xc00150e820) Stream removed, broadcasting: 3
I0127 11:31:18.216651       8 log.go:172] (0xc000907c30) Data frame received for 1
I0127 11:31:18.216717       8 log.go:172] (0xc000907c30) (0xc0021780a0) Stream removed, broadcasting: 5
I0127 11:31:18.216789       8 log.go:172] (0xc00150e780) (1) Data frame handling
I0127 11:31:18.216807       8 log.go:172] (0xc00150e780) (1) Data frame sent
I0127 11:31:18.216823       8 log.go:172] (0xc000907c30) (0xc00150e780) Stream removed, broadcasting: 1
I0127 11:31:18.216872       8 log.go:172] (0xc000907c30) Go away received
I0127 11:31:18.217132       8 log.go:172] (0xc000907c30) (0xc00150e780) Stream removed, broadcasting: 1
I0127 11:31:18.217153       8 log.go:172] (0xc000907c30) (0xc00150e820) Stream removed, broadcasting: 3
I0127 11:31:18.217179       8 log.go:172] (0xc000907c30) (0xc0021780a0) Stream removed, broadcasting: 5
Jan 27 11:31:18.217: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:31:18.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-d6mws" for this suite.
Jan 27 11:31:44.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:31:44.345: INFO: namespace: e2e-tests-pod-network-test-d6mws, resource: bindings, ignored listing per whitelist
Jan 27 11:31:44.375: INFO: namespace e2e-tests-pod-network-test-d6mws deletion completed in 26.14145771s

• [SLOW TEST:65.179 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
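
The check in the log asks the test web server at one pod IP to dial the other pod over HTTP and report what it reached. A plain Go sketch of the same request shape, using only the standard library; the IPs are the ones from this run, and the response body is printed raw rather than decoded.

package main

import (
    "fmt"
    "io"
    "net/http"
    "net/url"
)

func main() {
    // Pod IPs are only reachable from inside the cluster network; the suite
    // issues this request from a helper pod via exec, as the log shows.
    q := url.Values{}
    q.Set("request", "hostName")
    q.Set("protocol", "http")
    q.Set("host", "10.32.0.4")
    q.Set("port", "8080")
    q.Set("tries", "1")

    resp, err := http.Get("http://10.32.0.5:8080/dial?" + q.Encode())
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // JSON describing which endpoints answered
}
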
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:31:44.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 27 11:31:44.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:46.750: INFO: stderr: ""
Jan 27 11:31:46.751: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 11:31:46.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:46.950: INFO: stderr: ""
Jan 27 11:31:46.950: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-rn6pv "
Jan 27 11:31:46.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:47.173: INFO: stderr: ""
Jan 27 11:31:47.173: INFO: stdout: ""
Jan 27 11:31:47.173: INFO: update-demo-nautilus-4n2s7 is created but not running
Jan 27 11:31:52.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:52.341: INFO: stderr: ""
Jan 27 11:31:52.341: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-rn6pv "
Jan 27 11:31:52.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:52.486: INFO: stderr: ""
Jan 27 11:31:52.486: INFO: stdout: ""
Jan 27 11:31:52.486: INFO: update-demo-nautilus-4n2s7 is created but not running
Jan 27 11:31:57.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:57.623: INFO: stderr: ""
Jan 27 11:31:57.623: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-rn6pv "
Jan 27 11:31:57.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:31:57.712: INFO: stderr: ""
Jan 27 11:31:57.712: INFO: stdout: ""
Jan 27 11:31:57.712: INFO: update-demo-nautilus-4n2s7 is created but not running
Jan 27 11:32:02.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:02.920: INFO: stderr: ""
Jan 27 11:32:02.920: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-rn6pv "
Jan 27 11:32:02.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:03.135: INFO: stderr: ""
Jan 27 11:32:03.135: INFO: stdout: "true"
Jan 27 11:32:03.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:03.233: INFO: stderr: ""
Jan 27 11:32:03.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:03.233: INFO: validating pod update-demo-nautilus-4n2s7
Jan 27 11:32:03.267: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:03.267: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:03.268: INFO: update-demo-nautilus-4n2s7 is verified up and running
Jan 27 11:32:03.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rn6pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:03.382: INFO: stderr: ""
Jan 27 11:32:03.382: INFO: stdout: "true"
Jan 27 11:32:03.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rn6pv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:03.507: INFO: stderr: ""
Jan 27 11:32:03.507: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:03.507: INFO: validating pod update-demo-nautilus-rn6pv
Jan 27 11:32:03.530: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:03.530: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:03.530: INFO: update-demo-nautilus-rn6pv is verified up and running
STEP: scaling down the replication controller
Jan 27 11:32:03.533: INFO: scanned /root for discovery docs: 
Jan 27 11:32:03.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:04.699: INFO: stderr: ""
Jan 27 11:32:04.699: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 11:32:04.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:04.797: INFO: stderr: ""
Jan 27 11:32:04.797: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-rn6pv "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 27 11:32:09.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:09.964: INFO: stderr: ""
Jan 27 11:32:09.964: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-rn6pv "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 27 11:32:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:15.064: INFO: stderr: ""
Jan 27 11:32:15.064: INFO: stdout: "update-demo-nautilus-4n2s7 "
Jan 27 11:32:15.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:15.154: INFO: stderr: ""
Jan 27 11:32:15.154: INFO: stdout: "true"
Jan 27 11:32:15.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:15.260: INFO: stderr: ""
Jan 27 11:32:15.260: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:15.260: INFO: validating pod update-demo-nautilus-4n2s7
Jan 27 11:32:15.268: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:15.268: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:15.268: INFO: update-demo-nautilus-4n2s7 is verified up and running
STEP: scaling up the replication controller
Jan 27 11:32:15.269: INFO: scanned /root for discovery docs: 
Jan 27 11:32:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:16.834: INFO: stderr: ""
Jan 27 11:32:16.834: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 11:32:16.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:16.964: INFO: stderr: ""
Jan 27 11:32:16.964: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-x2hgp "
Jan 27 11:32:16.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:17.079: INFO: stderr: ""
Jan 27 11:32:17.079: INFO: stdout: "true"
Jan 27 11:32:17.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:17.214: INFO: stderr: ""
Jan 27 11:32:17.214: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:17.214: INFO: validating pod update-demo-nautilus-4n2s7
Jan 27 11:32:17.220: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:17.220: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:17.220: INFO: update-demo-nautilus-4n2s7 is verified up and running
Jan 27 11:32:17.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2hgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:17.315: INFO: stderr: ""
Jan 27 11:32:17.315: INFO: stdout: ""
Jan 27 11:32:17.315: INFO: update-demo-nautilus-x2hgp is created but not running
Jan 27 11:32:22.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:22.562: INFO: stderr: ""
Jan 27 11:32:22.562: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-x2hgp "
Jan 27 11:32:22.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:22.649: INFO: stderr: ""
Jan 27 11:32:22.649: INFO: stdout: "true"
Jan 27 11:32:22.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:22.735: INFO: stderr: ""
Jan 27 11:32:22.735: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:22.735: INFO: validating pod update-demo-nautilus-4n2s7
Jan 27 11:32:22.748: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:22.748: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:22.748: INFO: update-demo-nautilus-4n2s7 is verified up and running
Jan 27 11:32:22.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2hgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:22.828: INFO: stderr: ""
Jan 27 11:32:22.828: INFO: stdout: ""
Jan 27 11:32:22.828: INFO: update-demo-nautilus-x2hgp is created but not running
Jan 27 11:32:27.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.048: INFO: stderr: ""
Jan 27 11:32:28.048: INFO: stdout: "update-demo-nautilus-4n2s7 update-demo-nautilus-x2hgp "
Jan 27 11:32:28.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.149: INFO: stderr: ""
Jan 27 11:32:28.149: INFO: stdout: "true"
Jan 27 11:32:28.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n2s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.268: INFO: stderr: ""
Jan 27 11:32:28.268: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:28.268: INFO: validating pod update-demo-nautilus-4n2s7
Jan 27 11:32:28.274: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:28.274: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:28.274: INFO: update-demo-nautilus-4n2s7 is verified up and running
Jan 27 11:32:28.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2hgp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.400: INFO: stderr: ""
Jan 27 11:32:28.400: INFO: stdout: "true"
Jan 27 11:32:28.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x2hgp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.513: INFO: stderr: ""
Jan 27 11:32:28.513: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:32:28.513: INFO: validating pod update-demo-nautilus-x2hgp
Jan 27 11:32:28.537: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:32:28.538: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:32:28.538: INFO: update-demo-nautilus-x2hgp is verified up and running
STEP: using delete to clean up resources
Jan 27 11:32:28.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.656: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 11:32:28.656: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 27 11:32:28.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tzs8n'
Jan 27 11:32:28.795: INFO: stderr: "No resources found.\n"
Jan 27 11:32:28.796: INFO: stdout: ""
Jan 27 11:32:28.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tzs8n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 11:32:28.952: INFO: stderr: ""
Jan 27 11:32:28.952: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:32:28.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tzs8n" for this suite.
Jan 27 11:32:53.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:32:53.139: INFO: namespace: e2e-tests-kubectl-tzs8n, resource: bindings, ignored listing per whitelist
Jan 27 11:32:53.145: INFO: namespace e2e-tests-kubectl-tzs8n deletion completed in 24.171472627s

• [SLOW TEST:68.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
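
The suite shells out to kubectl scale, as the log shows; the same scaling can also be done programmatically. A hedged sketch assuming a recent client-go (the context-taking method signatures), with the namespace, controller name, and label taken from this run.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ctx := context.Background()
    ns := "e2e-tests-kubectl-tzs8n"

    // Scale the replication controller down to one replica, the programmatic
    // equivalent of `kubectl scale rc update-demo-nautilus --replicas=1`.
    rc, err := clientset.CoreV1().ReplicationControllers(ns).Get(ctx, "update-demo-nautilus", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    replicas := int32(1)
    rc.Spec.Replicas = &replicas
    if _, err := clientset.CoreV1().ReplicationControllers(ns).Update(ctx, rc, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }

    // The suite then polls pods with the RC's label until the count matches.
    pods, err := clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "name=update-demo"})
    if err != nil {
        panic(err)
    }
    fmt.Println("pods with name=update-demo:", len(pods.Items))
}
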
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:32:53.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 11:32:53.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-mzqd8" to be "success or failure"
Jan 27 11:32:53.477: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.644931ms
Jan 27 11:32:55.692: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242337994s
Jan 27 11:32:57.708: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257819365s
Jan 27 11:32:59.768: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318295123s
Jan 27 11:33:02.128: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.678119422s
Jan 27 11:33:04.256: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.805702536s
STEP: Saw pod success
Jan 27 11:33:04.256: INFO: Pod "downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:33:04.570: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 11:33:04.750: INFO: Waiting for pod downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005 to disappear
Jan 27 11:33:04.767: INFO: Pod downwardapi-volume-c1af8c5e-40f8-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:33:04.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mzqd8" for this suite.
Jan 27 11:33:10.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:33:10.925: INFO: namespace: e2e-tests-projected-mzqd8, resource: bindings, ignored listing per whitelist
Jan 27 11:33:10.969: INFO: namespace e2e-tests-projected-mzqd8 deletion completed in 6.192846067s

• [SLOW TEST:17.824 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
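
The projected downward-API volume exposes the container's own memory request as a file the container can read back. A sketch of the volume definition plus a matching request value; the container name comes from the log, while the 32Mi figure and the file path are assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_request", // illustrative file name
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.memory",
                            },
                        }},
                    },
                }},
            },
        },
    }

    // The container must declare a request for the projected value to reflect it.
    requests := corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")}
    q := requests[corev1.ResourceMemory]

    fmt.Printf("%+v\n", vol.Projected.Sources[0].DownwardAPI.Items[0])
    fmt.Println("container memory request:", q.String())
}
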
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:33:10.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 11:33:11.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:33:21.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xdvms" for this suite.
Jan 27 11:34:03.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:34:03.517: INFO: namespace: e2e-tests-pods-xdvms, resource: bindings, ignored listing per whitelist
Jan 27 11:34:03.545: INFO: namespace e2e-tests-pods-xdvms deletion completed in 42.212946963s

• [SLOW TEST:52.576 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
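
The case above drives the pod /log subresource over a websocket connection; a simpler way to exercise the same data path is the ordinary streaming log client, sketched here with a recent client-go. The pod and namespace names are placeholders, and the websocket transport itself is not shown.

package main

import (
    "context"
    "fmt"
    "io"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Stream the container's logs over plain HTTP; the conformance case
    // retrieves the same content via a websocket client.
    req := clientset.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
    stream, err := req.Stream(context.Background())
    if err != nil {
        panic(err)
    }
    defer stream.Close()

    data, _ := io.ReadAll(stream)
    fmt.Print(string(data))
}
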
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:34:03.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 27 11:34:03.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 27 11:34:03.820: INFO: Waiting for terminating namespaces to be deleted...
Jan 27 11:34:03.826: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 27 11:34:03.859: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 27 11:34:03.859: INFO: 	Container coredns ready: true, restart count 0
Jan 27 11:34:03.859: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 11:34:03.859: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 11:34:03.859: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 11:34:03.860: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 27 11:34:03.860: INFO: 	Container coredns ready: true, restart count 0
Jan 27 11:34:03.860: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 27 11:34:03.860: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 27 11:34:03.860: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 11:34:03.860: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 27 11:34:03.860: INFO: 	Container weave ready: true, restart count 0
Jan 27 11:34:03.860: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f1c1c235-40f8-11ea-b652-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f1c1c235-40f8-11ea-b652-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f1c1c235-40f8-11ea-b652-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:34:24.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-d97w9" for this suite.
Jan 27 11:34:48.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:34:48.339: INFO: namespace: e2e-tests-sched-pred-d97w9, resource: bindings, ignored listing per whitelist
Jan 27 11:34:48.472: INFO: namespace e2e-tests-sched-pred-d97w9 deletion completed in 24.218947939s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:44.926 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
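
For reference, a minimal Go sketch of the kind of pod spec the NodeSelector predicate test above relaunches once the node is labelled: a pod whose nodeSelector must match a label on the node before the scheduler will place it. The label key/value, pod name, and image below are illustrative stand-ins, not values taken from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
			// The scheduler only places this pod on a node carrying this label,
			// mirroring the random kubernetes.io/e2e-* label applied above.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-example": "42",
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
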
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:34:48.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 27 11:34:48.719: INFO: Waiting up to 5m0s for pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005" in namespace "e2e-tests-var-expansion-n89g6" to be "success or failure"
Jan 27 11:34:48.728: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.033015ms
Jan 27 11:34:50.770: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051595752s
Jan 27 11:34:52.787: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068250177s
Jan 27 11:34:55.237: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518241732s
Jan 27 11:34:57.272: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553541254s
Jan 27 11:34:59.320: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601197504s
STEP: Saw pod success
Jan 27 11:34:59.320: INFO: Pod "var-expansion-066d41c1-40f9-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:34:59.330: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-066d41c1-40f9-11ea-b652-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 27 11:34:59.602: INFO: Waiting for pod var-expansion-066d41c1-40f9-11ea-b652-0242ac110005 to disappear
Jan 27 11:34:59.626: INFO: Pod var-expansion-066d41c1-40f9-11ea-b652-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:34:59.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-n89g6" for this suite.
Jan 27 11:35:05.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:35:05.864: INFO: namespace: e2e-tests-var-expansion-n89g6, resource: bindings, ignored listing per whitelist
Jan 27 11:35:05.977: INFO: namespace e2e-tests-var-expansion-n89g6 deletion completed in 6.337767386s

• [SLOW TEST:17.503 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
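
The variable-expansion test above passes because the kubelet expands $(VAR) references in a container's command against that container's environment before starting it. A minimal sketch of such a pod spec follows; the variable name, message, and image are illustrative, not taken from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "hello from the environment",
				}},
				// $(MESSAGE) is expanded by the kubelet before the command runs.
				Command: []string{"sh", "-c", "echo $(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
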
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:35:05.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 11:35:06.248: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:35:07.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-5x8xp" for this suite.
Jan 27 11:35:13.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:35:13.529: INFO: namespace: e2e-tests-custom-resource-definition-5x8xp, resource: bindings, ignored listing per whitelist
Jan 27 11:35:13.695: INFO: namespace e2e-tests-custom-resource-definition-5x8xp deletion completed in 6.28037444s

• [SLOW TEST:7.718 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
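
The CustomResourceDefinition test above only creates and then deletes a CRD object through the apiextensions API group. A minimal sketch of such a CRD, using the apiextensions.k8s.io/v1beta1 types that match the v1.13 cluster in this log; the group and kind names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
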
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:35:13.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-15731df3-40f9-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 11:35:14.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-t2d2r" to be "success or failure"
Jan 27 11:35:14.025: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.14167ms
Jan 27 11:35:16.106: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094555648s
Jan 27 11:35:18.121: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10948585s
Jan 27 11:35:21.179: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.167225587s
Jan 27 11:35:23.200: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188790462s
Jan 27 11:35:25.210: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.199000177s
STEP: Saw pod success
Jan 27 11:35:25.211: INFO: Pod "pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:35:25.219: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 27 11:35:26.108: INFO: Waiting for pod pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005 to disappear
Jan 27 11:35:26.484: INFO: Pod pod-configmaps-157dd215-40f9-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:35:26.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t2d2r" for this suite.
Jan 27 11:35:34.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:35:34.735: INFO: namespace: e2e-tests-configmap-t2d2r, resource: bindings, ignored listing per whitelist
Jan 27 11:35:34.847: INFO: namespace e2e-tests-configmap-t2d2r deletion completed in 8.342323238s

• [SLOW TEST:21.152 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
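
The ConfigMap volume test above mounts a ConfigMap with an explicit key-to-path mapping and a per-item file mode. A minimal sketch of such a pod spec; the ConfigMap name, key, path, mode, and image are illustrative, and int32Ptr is a small local helper.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Map key "data-1" to the file "path/to/data-2" with mode 0400.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: int32Ptr(0400),
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
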
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:35:34.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 27 11:35:55.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:35:55.258: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:35:57.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:35:57.326: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:35:59.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:35:59.282: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:01.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:01.280: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:03.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:03.498: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:05.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:05.280: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:07.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:07.280: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:09.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:09.283: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:11.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:11.290: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:13.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:13.275: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:15.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:15.275: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:17.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:17.288: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 27 11:36:19.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 27 11:36:19.276: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:36:19.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lbr5z" for this suite.
Jan 27 11:36:43.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:36:43.650: INFO: namespace: e2e-tests-container-lifecycle-hook-lbr5z, resource: bindings, ignored listing per whitelist
Jan 27 11:36:43.663: INFO: namespace e2e-tests-container-lifecycle-hook-lbr5z deletion completed in 24.349998439s

• [SLOW TEST:68.815 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
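
The lifecycle-hook test above attaches a PreStop exec hook, deletes the pod, and then checks that the hook reached a separate handler pod. A minimal sketch of the hooked pod spec; the handler URL, image, and command are illustrative, and corev1.Handler is the type name in the API version matching this log (renamed LifecycleHandler in newer releases).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container just before it is terminated;
					// the target URL is an illustrative stand-in for the handler pod.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "wget -q -O- http://hook-handler:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
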
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:36:43.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 11:36:43.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-d8lzd" to be "success or failure"
Jan 27 11:36:44.003: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.147644ms
Jan 27 11:36:46.040: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12227949s
Jan 27 11:36:48.056: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138449621s
Jan 27 11:36:50.309: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391958953s
Jan 27 11:36:52.325: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.40711371s
Jan 27 11:36:54.337: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.419701951s
STEP: Saw pod success
Jan 27 11:36:54.337: INFO: Pod "downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:36:54.341: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 11:36:54.703: INFO: Waiting for pod downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005 to disappear
Jan 27 11:36:54.738: INFO: Pod downwardapi-volume-4b15c723-40f9-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:36:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d8lzd" for this suite.
Jan 27 11:37:02.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:37:02.108: INFO: namespace: e2e-tests-downward-api-d8lzd, resource: bindings, ignored listing per whitelist
Jan 27 11:37:02.261: INFO: namespace e2e-tests-downward-api-d8lzd deletion completed in 7.49501446s

• [SLOW TEST:18.598 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
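
The Downward API volume test above projects the container's own memory limit into a file via a resourceFieldRef. A minimal sketch of such a pod spec; the limit value, file path, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// Projects the container's memory limit into the file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
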
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:37:02.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 27 11:37:02.447: INFO: Waiting up to 5m0s for pod "pod-56215c23-40f9-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-zb4tf" to be "success or failure"
Jan 27 11:37:02.619: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 171.197225ms
Jan 27 11:37:04.650: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202289849s
Jan 27 11:37:06.663: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21570756s
Jan 27 11:37:09.023: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575842918s
Jan 27 11:37:11.037: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5899234s
Jan 27 11:37:13.053: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.605564293s
Jan 27 11:37:15.062: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.615080475s
STEP: Saw pod success
Jan 27 11:37:15.062: INFO: Pod "pod-56215c23-40f9-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:37:15.066: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-56215c23-40f9-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 11:37:16.085: INFO: Waiting for pod pod-56215c23-40f9-11ea-b652-0242ac110005 to disappear
Jan 27 11:37:16.505: INFO: Pod pod-56215c23-40f9-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:37:16.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zb4tf" for this suite.
Jan 27 11:37:22.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:37:22.699: INFO: namespace: e2e-tests-emptydir-zb4tf, resource: bindings, ignored listing per whitelist
Jan 27 11:37:22.730: INFO: namespace e2e-tests-emptydir-zb4tf deletion completed in 6.199979898s

• [SLOW TEST:20.469 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
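
The EmptyDir test above writes a file with the requested mode into an emptyDir volume on the default (node-disk) medium and reads it back. A minimal sketch of such a pod spec with illustrative names, image, and command.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium means "default": backed by node storage rather than tmpfs.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with 0666 permissions and list it, as the test does.
				Command: []string{"sh", "-c",
					"touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
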
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:37:22.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 27 11:37:33.548: INFO: Successfully updated pod "labelsupdate62556b99-40f9-11ea-b652-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:37:35.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6d9k9" for this suite.
Jan 27 11:37:59.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:37:59.948: INFO: namespace: e2e-tests-projected-6d9k9, resource: bindings, ignored listing per whitelist
Jan 27 11:38:00.030: INFO: namespace e2e-tests-projected-6d9k9 deletion completed in 24.221544478s

• [SLOW TEST:37.300 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:38:00.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 27 11:38:12.882: INFO: Successfully updated pod "pod-update-78975c99-40f9-11ea-b652-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 27 11:38:12.903: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:38:12.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5gd7p" for this suite.
Jan 27 11:38:36.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:38:36.983: INFO: namespace: e2e-tests-pods-5gd7p, resource: bindings, ignored listing per whitelist
Jan 27 11:38:37.043: INFO: namespace e2e-tests-pods-5gd7p deletion completed in 24.131023353s

• [SLOW TEST:37.012 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
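
The Pods "should be updated" test above submits a pod and then mutates it through the API. A minimal client-go sketch of that read-modify-write flow, assuming an illustrative pod name, namespace, and label, and using the pre-1.17 client-go method signatures that match the cluster in this log (newer releases also take a context.Context).

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig path the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods := clientset.CoreV1().Pods("default") // namespace is illustrative

	// Read-modify-write: fetch the pod, change a label, write it back.
	pod, err := pods.Get("pod-update-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("pod labels updated")
}
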
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:38:37.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 27 11:38:49.887: INFO: Successfully updated pod "annotationupdate8e98f1a0-40f9-11ea-b652-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:38:51.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t5fgp" for this suite.
Jan 27 11:39:16.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:39:16.197: INFO: namespace: e2e-tests-downward-api-t5fgp, resource: bindings, ignored listing per whitelist
Jan 27 11:39:16.243: INFO: namespace e2e-tests-downward-api-t5fgp deletion completed in 24.247317891s

• [SLOW TEST:39.200 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:39:16.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 27 11:39:16.551: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:39:40.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-rrdkp" for this suite.
Jan 27 11:40:04.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:40:04.954: INFO: namespace: e2e-tests-init-container-rrdkp, resource: bindings, ignored listing per whitelist
Jan 27 11:40:05.073: INFO: namespace e2e-tests-init-container-rrdkp deletion completed in 24.328425746s

• [SLOW TEST:48.830 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
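
The InitContainer test above verifies that init containers run to completion, in order, before the regular containers of a RestartPolicy=Always pod start. A minimal sketch of such a pod spec with illustrative names and images.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run to completion, one at a time,
			// before any regular container starts.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run-1",
				Image: "k8s.gcr.io/pause:3.1", // illustrative long-running container
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
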
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:40:05.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 11:40:05.210: INFO: Creating deployment "test-recreate-deployment"
Jan 27 11:40:05.230: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 27 11:40:05.277: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 27 11:40:07.297: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 27 11:40:07.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:40:09.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:40:11.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:40:13.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:40:15.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715722005, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 27 11:40:17.341: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 27 11:40:17.364: INFO: Updating deployment test-recreate-deployment
Jan 27 11:40:17.364: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 27 11:40:20.357: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-gc9dg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc9dg/deployments/test-recreate-deployment,UID:c313e26b-40f9-11ea-a994-fa163e34d433,ResourceVersion:19628532,Generation:2,CreationTimestamp:2020-01-27 11:40:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-27 11:40:20 +0000 UTC 2020-01-27 11:40:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-27 11:40:20 +0000 UTC 2020-01-27 11:40:05 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 27 11:40:20.368: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-gc9dg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc9dg/replicasets/test-recreate-deployment-589c4bfd,UID:cab79e18-40f9-11ea-a994-fa163e34d433,ResourceVersion:19628529,Generation:1,CreationTimestamp:2020-01-27 11:40:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c313e26b-40f9-11ea-a994-fa163e34d433 0xc0016e5c6f 0xc0016e5c80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 11:40:20.368: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 27 11:40:20.368: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-gc9dg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gc9dg/replicasets/test-recreate-deployment-5bf7f65dc,UID:c31d872c-40f9-11ea-a994-fa163e34d433,ResourceVersion:19628520,Generation:2,CreationTimestamp:2020-01-27 11:40:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c313e26b-40f9-11ea-a994-fa163e34d433 0xc0016e5f30 0xc0016e5f31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 11:40:20.374: INFO: Pod "test-recreate-deployment-589c4bfd-6484v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-6484v,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-gc9dg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gc9dg/pods/test-recreate-deployment-589c4bfd-6484v,UID:cae10f4b-40f9-11ea-a994-fa163e34d433,ResourceVersion:19628533,Generation:0,CreationTimestamp:2020-01-27 11:40:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd cab79e18-40f9-11ea-a994-fa163e34d433 0xc001ee07ef 0xc001ee0840}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xgflb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xgflb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xgflb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ee0930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ee0950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:40:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:40:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:40:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 11:40:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 11:40:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:40:20.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gc9dg" for this suite.
Jan 27 11:40:32.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:40:32.652: INFO: namespace: e2e-tests-deployment-gc9dg, resource: bindings, ignored listing per whitelist
Jan 27 11:40:32.714: INFO: namespace e2e-tests-deployment-gc9dg deletion completed in 12.331254683s

• [SLOW TEST:27.641 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
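
The RecreateDeployment test above relies on the Recreate strategy: the old ReplicaSet is scaled to zero before the new one is scaled up, so old and new pods never run side by side. A minimal sketch of such a Deployment, reusing the image and label visible in the dump above; int32Ptr is a small local helper.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: terminate all old pods before creating new ones.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(deployment, "", "  ")
	fmt.Println(string(out))
}
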
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:40:32.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 11:40:32.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-n8hfc'
Jan 27 11:40:33.081: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 11:40:33.082: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 27 11:40:35.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n8hfc'
Jan 27 11:40:36.172: INFO: stderr: ""
Jan 27 11:40:36.173: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:40:36.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n8hfc" for this suite.
Jan 27 11:40:42.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:40:42.360: INFO: namespace: e2e-tests-kubectl-n8hfc, resource: bindings, ignored listing per whitelist
Jan 27 11:40:42.688: INFO: namespace e2e-tests-kubectl-n8hfc deletion completed in 6.497175004s

• [SLOW TEST:9.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:40:42.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 27 11:40:43.026: INFO: Waiting up to 5m0s for pod "pod-d990a90a-40f9-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-rcxkr" to be "success or failure"
Jan 27 11:40:43.046: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.3848ms
Jan 27 11:40:45.483: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456179816s
Jan 27 11:40:47.503: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476177381s
Jan 27 11:40:49.525: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498325399s
Jan 27 11:40:51.980: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953848934s
Jan 27 11:40:54.134: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.107176038s
STEP: Saw pod success
Jan 27 11:40:54.134: INFO: Pod "pod-d990a90a-40f9-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:40:54.140: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d990a90a-40f9-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 11:40:54.272: INFO: Waiting for pod pod-d990a90a-40f9-11ea-b652-0242ac110005 to disappear
Jan 27 11:40:54.612: INFO: Pod pod-d990a90a-40f9-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:40:54.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rcxkr" for this suite.
Jan 27 11:41:00.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:41:01.010: INFO: namespace: e2e-tests-emptydir-rcxkr, resource: bindings, ignored listing per whitelist
Jan 27 11:41:01.014: INFO: namespace e2e-tests-emptydir-rcxkr deletion completed in 6.363483998s

• [SLOW TEST:18.326 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:41:01.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:42:01.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qq954" for this suite.
Jan 27 11:42:25.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:42:25.708: INFO: namespace: e2e-tests-container-probe-qq954, resource: bindings, ignored listing per whitelist
Jan 27 11:42:25.794: INFO: namespace e2e-tests-container-probe-qq954 deletion completed in 24.155005628s

• [SLOW TEST:84.779 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
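Note on the failing readiness probe above: the pod runs normally but its probe always fails, so it never becomes Ready and its restartCount stays 0 for the whole observation window. A minimal sketch, with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-never-ready   # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # keep the container alive
    readinessProbe:
      exec:
        command: ["sh", "-c", "exit 1"]   # always fails, so the pod is never Ready
      initialDelaySeconds: 5
      periodSeconds: 5

kubectl get pod would keep reporting READY 0/1 with RESTARTS 0, which is exactly what the test asserts.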
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:42:25.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-rwhhh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwhhh to expose endpoints map[]
Jan 27 11:42:26.018: INFO: Get endpoints failed (21.155034ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 27 11:42:27.031: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwhhh exposes endpoints map[] (1.034505981s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-rwhhh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwhhh to expose endpoints map[pod1:[80]]
Jan 27 11:42:31.459: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.387464169s elapsed, will retry)
Jan 27 11:42:37.043: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.971624858s elapsed, will retry)
Jan 27 11:42:38.059: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwhhh exposes endpoints map[pod1:[80]] (10.987851221s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-rwhhh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwhhh to expose endpoints map[pod1:[80] pod2:[80]]
Jan 27 11:42:44.692: INFO: Unexpected endpoints: found map[179f5e31-40fa-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (6.591601562s elapsed, will retry)
Jan 27 11:42:48.811: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwhhh exposes endpoints map[pod1:[80] pod2:[80]] (10.710540017s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-rwhhh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwhhh to expose endpoints map[pod2:[80]]
Jan 27 11:42:49.952: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwhhh exposes endpoints map[pod2:[80]] (1.122723032s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-rwhhh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rwhhh to expose endpoints map[]
Jan 27 11:42:51.135: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rwhhh exposes endpoints map[] (1.17285559s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:42:51.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-rwhhh" for this suite.
Jan 27 11:43:15.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:43:15.900: INFO: namespace: e2e-tests-services-rwhhh, resource: bindings, ignored listing per whitelist
Jan 27 11:43:16.001: INFO: namespace e2e-tests-services-rwhhh deletion completed in 24.310347243s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.207 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
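Note on the endpoint-test2 flow above: it is a selector-based Service whose Endpoints object tracks labelled pods as they come and go, which is why the expected map shrinks back to map[] once both pods are deleted. A sketch of the two pieces involved; the label, image, and port are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-demo        # Endpoints track Running pods that carry this label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-demo
spec:
  containers:
  - name: serve
    image: nginx              # any container listening on port 80
    ports:
    - containerPort: 80

kubectl get endpoints endpoint-test2 should list pod1's IP on port 80 once the pod is Running and drop it again when the pod is deleted.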
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:43:16.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8v4p8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 11:43:16.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 11:43:52.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8v4p8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 11:43:52.419: INFO: >>> kubeConfig: /root/.kube/config
I0127 11:43:52.566779       8 log.go:172] (0xc000907e40) (0xc0007312c0) Create stream
I0127 11:43:52.566910       8 log.go:172] (0xc000907e40) (0xc0007312c0) Stream added, broadcasting: 1
I0127 11:43:52.575259       8 log.go:172] (0xc000907e40) Reply frame received for 1
I0127 11:43:52.575307       8 log.go:172] (0xc000907e40) (0xc000c95cc0) Create stream
I0127 11:43:52.575334       8 log.go:172] (0xc000907e40) (0xc000c95cc0) Stream added, broadcasting: 3
I0127 11:43:52.577597       8 log.go:172] (0xc000907e40) Reply frame received for 3
I0127 11:43:52.577649       8 log.go:172] (0xc000907e40) (0xc001d2ebe0) Create stream
I0127 11:43:52.577667       8 log.go:172] (0xc000907e40) (0xc001d2ebe0) Stream added, broadcasting: 5
I0127 11:43:52.579955       8 log.go:172] (0xc000907e40) Reply frame received for 5
I0127 11:43:52.871991       8 log.go:172] (0xc000907e40) Data frame received for 3
I0127 11:43:52.872135       8 log.go:172] (0xc000c95cc0) (3) Data frame handling
I0127 11:43:52.872172       8 log.go:172] (0xc000c95cc0) (3) Data frame sent
I0127 11:43:53.003414       8 log.go:172] (0xc000907e40) Data frame received for 1
I0127 11:43:53.003593       8 log.go:172] (0xc0007312c0) (1) Data frame handling
I0127 11:43:53.003627       8 log.go:172] (0xc0007312c0) (1) Data frame sent
I0127 11:43:53.004089       8 log.go:172] (0xc000907e40) (0xc0007312c0) Stream removed, broadcasting: 1
I0127 11:43:53.004679       8 log.go:172] (0xc000907e40) (0xc000c95cc0) Stream removed, broadcasting: 3
I0127 11:43:53.005110       8 log.go:172] (0xc000907e40) (0xc001d2ebe0) Stream removed, broadcasting: 5
I0127 11:43:53.005169       8 log.go:172] (0xc000907e40) (0xc0007312c0) Stream removed, broadcasting: 1
I0127 11:43:53.005190       8 log.go:172] (0xc000907e40) (0xc000c95cc0) Stream removed, broadcasting: 3
I0127 11:43:53.005209       8 log.go:172] (0xc000907e40) (0xc001d2ebe0) Stream removed, broadcasting: 5
Jan 27 11:43:53.005: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:43:53.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0127 11:43:53.006390       8 log.go:172] (0xc000907e40) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-8v4p8" for this suite.
Jan 27 11:44:19.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:44:19.208: INFO: namespace: e2e-tests-pod-network-test-8v4p8, resource: bindings, ignored listing per whitelist
Jan 27 11:44:19.214: INFO: namespace e2e-tests-pod-network-test-8v4p8 deletion completed in 26.193534004s

• [SLOW TEST:63.214 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:44:19.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 27 11:44:19.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 27 11:44:19.638: INFO: stderr: ""
Jan 27 11:44:19.638: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:44:19.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hzqkw" for this suite.
Jan 27 11:44:25.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:44:25.771: INFO: namespace: e2e-tests-kubectl-hzqkw, resource: bindings, ignored listing per whitelist
Jan 27 11:44:25.886: INFO: namespace e2e-tests-kubectl-hzqkw deletion completed in 6.23161917s

• [SLOW TEST:6.671 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:44:25.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5e90bd4e-40fa-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5e90bd4e-40fa-11ea-b652-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:44:38.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2vx24" for this suite.
Jan 27 11:45:02.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:45:02.651: INFO: namespace: e2e-tests-configmap-2vx24, resource: bindings, ignored listing per whitelist
Jan 27 11:45:02.756: INFO: namespace e2e-tests-configmap-2vx24 deletion completed in 24.229629044s

• [SLOW TEST:36.869 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
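Note on the update-propagation test above: a ConfigMap-backed volume is updated in place by the kubelet on its periodic sync, so a container that keeps re-reading the mounted file eventually sees the new value without a restart. A minimal sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-test-upd-demo

Editing data-1 in the ConfigMap should eventually change what the container prints; note that subPath mounts would not pick up such updates.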
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:45:02.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vgfxz
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-vgfxz
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-vgfxz
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-vgfxz
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-vgfxz
Jan 27 11:45:17.251: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vgfxz, name: ss-0, uid: 7bd116c5-40fa-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 27 11:45:22.478: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vgfxz, name: ss-0, uid: 7bd116c5-40fa-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 27 11:45:22.523: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vgfxz, name: ss-0, uid: 7bd116c5-40fa-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 27 11:45:22.667: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-vgfxz
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-vgfxz
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-vgfxz and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 27 11:45:42.122: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vgfxz
Jan 27 11:45:42.190: INFO: Scaling statefulset ss to 0
Jan 27 11:45:52.291: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 11:45:52.298: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:45:52.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vgfxz" for this suite.
Jan 27 11:46:00.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:46:00.677: INFO: namespace: e2e-tests-statefulset-vgfxz, resource: bindings, ignored listing per whitelist
Jan 27 11:46:00.747: INFO: namespace e2e-tests-statefulset-vgfxz deletion completed in 8.360214442s

• [SLOW TEST:57.991 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
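Note on the eviction scenario above: a bare pod already holds a hostPort on the node, so the stateful pod ss-0 fails, the StatefulSet controller deletes it, and once the conflicting pod is removed ss-0 is recreated and runs. A sketch of a single-replica StatefulSet of that shape; the labels, image, and port number are illustrative.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # headless governing Service, created separately
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017      # illustrative; the conflict comes from another pod already using this hostPort on the node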
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:46:00.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-973abac5-40fa-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 11:46:01.390: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-tnl9f" to be "success or failure"
Jan 27 11:46:01.426: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.464584ms
Jan 27 11:46:03.693: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302382686s
Jan 27 11:46:05.708: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317423653s
Jan 27 11:46:08.346: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955767392s
Jan 27 11:46:10.567: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.176474349s
Jan 27 11:46:12.588: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.197205635s
Jan 27 11:46:14.667: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.276750026s
STEP: Saw pod success
Jan 27 11:46:14.667: INFO: Pod "pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:46:14.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 27 11:46:15.237: INFO: Waiting for pod pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005 to disappear
Jan 27 11:46:15.249: INFO: Pod pod-projected-configmaps-975d5aa4-40fa-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:46:15.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tnl9f" for this suite.
Jan 27 11:46:21.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:46:21.627: INFO: namespace: e2e-tests-projected-tnl9f, resource: bindings, ignored listing per whitelist
Jan 27 11:46:22.749: INFO: namespace e2e-tests-projected-tnl9f deletion completed in 7.491838684s

• [SLOW TEST:22.002 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
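Note on the projected-ConfigMap case above: one ConfigMap is exposed through two projected volumes mounted at different paths in the same pod, and the test container reads the same key from both. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-1
      mountPath: /etc/projected-1
    - name: projected-2
      mountPath: /etc/projected-2
  volumes:
  - name: projected-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo   # must exist before the pod starts
  - name: projected-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo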
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:46:22.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-m59l7/configmap-test-a4494db9-40fa-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 11:46:23.190: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-m59l7" to be "success or failure"
Jan 27 11:46:23.229: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.170665ms
Jan 27 11:46:25.254: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063992902s
Jan 27 11:46:27.270: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079823847s
Jan 27 11:46:30.147: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956500596s
Jan 27 11:46:32.176: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.98597833s
Jan 27 11:46:34.195: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.004835254s
STEP: Saw pod success
Jan 27 11:46:34.195: INFO: Pod "pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:46:34.199: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005 container env-test: 
STEP: delete the pod
Jan 27 11:46:34.309: INFO: Waiting for pod pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005 to disappear
Jan 27 11:46:34.328: INFO: Pod pod-configmaps-a4583007-40fa-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:46:34.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m59l7" for this suite.
Jan 27 11:46:40.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:46:40.702: INFO: namespace: e2e-tests-configmap-m59l7, resource: bindings, ignored listing per whitelist
Jan 27 11:46:40.717: INFO: namespace e2e-tests-configmap-m59l7 deletion completed in 6.371217584s

• [SLOW TEST:17.967 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
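Note on the env-test pod above: the ConfigMap key is injected through valueFrom.configMapKeyRef rather than a volume. A minimal sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1

The container exits after printing the variable, so the pod reaches Succeeded just as the wait loop in the log expects.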
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:46:40.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 27 11:46:40.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:43.177: INFO: stderr: ""
Jan 27 11:46:43.177: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 11:46:43.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:43.523: INFO: stderr: ""
Jan 27 11:46:43.523: INFO: stdout: "update-demo-nautilus-4fvcw update-demo-nautilus-qtcmc "
Jan 27 11:46:43.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fvcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:43.659: INFO: stderr: ""
Jan 27 11:46:43.659: INFO: stdout: ""
Jan 27 11:46:43.659: INFO: update-demo-nautilus-4fvcw is created but not running
Jan 27 11:46:48.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:48.812: INFO: stderr: ""
Jan 27 11:46:48.812: INFO: stdout: "update-demo-nautilus-4fvcw update-demo-nautilus-qtcmc "
Jan 27 11:46:48.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fvcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:48.982: INFO: stderr: ""
Jan 27 11:46:48.982: INFO: stdout: ""
Jan 27 11:46:48.982: INFO: update-demo-nautilus-4fvcw is created but not running
Jan 27 11:46:53.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:54.105: INFO: stderr: ""
Jan 27 11:46:54.105: INFO: stdout: "update-demo-nautilus-4fvcw update-demo-nautilus-qtcmc "
Jan 27 11:46:54.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fvcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:54.197: INFO: stderr: ""
Jan 27 11:46:54.197: INFO: stdout: ""
Jan 27 11:46:54.197: INFO: update-demo-nautilus-4fvcw is created but not running
Jan 27 11:46:59.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:59.405: INFO: stderr: ""
Jan 27 11:46:59.405: INFO: stdout: "update-demo-nautilus-4fvcw update-demo-nautilus-qtcmc "
Jan 27 11:46:59.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fvcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:59.522: INFO: stderr: ""
Jan 27 11:46:59.522: INFO: stdout: "true"
Jan 27 11:46:59.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4fvcw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:59.619: INFO: stderr: ""
Jan 27 11:46:59.619: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:46:59.619: INFO: validating pod update-demo-nautilus-4fvcw
Jan 27 11:46:59.631: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:46:59.631: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:46:59.631: INFO: update-demo-nautilus-4fvcw is verified up and running
Jan 27 11:46:59.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtcmc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:59.745: INFO: stderr: ""
Jan 27 11:46:59.745: INFO: stdout: "true"
Jan 27 11:46:59.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtcmc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:46:59.867: INFO: stderr: ""
Jan 27 11:46:59.867: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 11:46:59.867: INFO: validating pod update-demo-nautilus-qtcmc
Jan 27 11:46:59.875: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 11:46:59.875: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 27 11:46:59.875: INFO: update-demo-nautilus-qtcmc is verified up and running
STEP: using delete to clean up resources
Jan 27 11:46:59.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:47:00.031: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 11:47:00.031: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 27 11:47:00.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-97w62'
Jan 27 11:47:00.163: INFO: stderr: "No resources found.\n"
Jan 27 11:47:00.163: INFO: stdout: ""
Jan 27 11:47:00.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-97w62 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 11:47:00.380: INFO: stderr: ""
Jan 27 11:47:00.380: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:47:00.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-97w62" for this suite.
Jan 27 11:47:26.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:47:26.636: INFO: namespace: e2e-tests-kubectl-97w62, resource: bindings, ignored listing per whitelist
Jan 27 11:47:26.774: INFO: namespace e2e-tests-kubectl-97w62 deletion completed in 26.378241326s

• [SLOW TEST:46.057 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
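Note on update-demo-nautilus above: it is a plain ReplicationController selected by the name=update-demo label, and the Go templates in the log simply poll each pod until the update-demo container reports a running state and the expected image. A sketch of such a controller; the replica count and image follow the log, the remaining names are illustrative.

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80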
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:47:26.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 27 11:47:27.128: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zz6qc,SelfLink:/api/v1/namespaces/e2e-tests-watch-zz6qc/configmaps/e2e-watch-test-label-changed,UID:ca692df4-40fa-11ea-a994-fa163e34d433,ResourceVersion:19629545,Generation:0,CreationTimestamp:2020-01-27 11:47:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 11:47:27.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zz6qc,SelfLink:/api/v1/namespaces/e2e-tests-watch-zz6qc/configmaps/e2e-watch-test-label-changed,UID:ca692df4-40fa-11ea-a994-fa163e34d433,ResourceVersion:19629546,Generation:0,CreationTimestamp:2020-01-27 11:47:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 27 11:47:27.128: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zz6qc,SelfLink:/api/v1/namespaces/e2e-tests-watch-zz6qc/configmaps/e2e-watch-test-label-changed,UID:ca692df4-40fa-11ea-a994-fa163e34d433,ResourceVersion:19629548,Generation:0,CreationTimestamp:2020-01-27 11:47:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 27 11:47:37.204: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zz6qc,SelfLink:/api/v1/namespaces/e2e-tests-watch-zz6qc/configmaps/e2e-watch-test-label-changed,UID:ca692df4-40fa-11ea-a994-fa163e34d433,ResourceVersion:19629561,Generation:0,CreationTimestamp:2020-01-27 11:47:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 11:47:37.204: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zz6qc,SelfLink:/api/v1/namespaces/e2e-tests-watch-zz6qc/configmaps/e2e-watch-test-label-changed,UID:ca692df4-40fa-11ea-a994-fa163e34d433,ResourceVersion:19629562,Generation:0,CreationTimestamp:2020-01-27 11:47:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 27 11:47:37.204: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zz6qc,SelfLink:/api/v1/namespaces/e2e-tests-watch-zz6qc/configmaps/e2e-watch-test-label-changed,UID:ca692df4-40fa-11ea-a994-fa163e34d433,ResourceVersion:19629563,Generation:0,CreationTimestamp:2020-01-27 11:47:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:47:37.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zz6qc" for this suite.
Jan 27 11:47:43.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:47:43.589: INFO: namespace: e2e-tests-watch-zz6qc, resource: bindings, ignored listing per whitelist
Jan 27 11:47:44.379: INFO: namespace e2e-tests-watch-zz6qc deletion completed in 7.103835413s

• [SLOW TEST:17.605 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:47:44.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 27 11:47:44.691: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:47:44.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4bkp6" for this suite.
Jan 27 11:47:50.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:47:50.982: INFO: namespace: e2e-tests-kubectl-4bkp6, resource: bindings, ignored listing per whitelist
Jan 27 11:47:50.989: INFO: namespace e2e-tests-kubectl-4bkp6 deletion completed in 6.191371121s

• [SLOW TEST:6.609 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:47:50.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d8d09bdd-40fa-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 11:47:51.211: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-smddb" to be "success or failure"
Jan 27 11:47:51.231: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.706911ms
Jan 27 11:47:53.250: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039523722s
Jan 27 11:47:55.267: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055967043s
Jan 27 11:47:57.282: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071462657s
Jan 27 11:47:59.299: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088520859s
Jan 27 11:48:01.316: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105455585s
Jan 27 11:48:03.335: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.124338998s
STEP: Saw pod success
Jan 27 11:48:03.335: INFO: Pod "pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 11:48:03.348: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 27 11:48:03.534: INFO: Waiting for pod pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005 to disappear
Jan 27 11:48:03.553: INFO: Pod pod-configmaps-d8d13f8e-40fa-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:48:03.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-smddb" for this suite.
Jan 27 11:48:09.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:48:09.671: INFO: namespace: e2e-tests-configmap-smddb, resource: bindings, ignored listing per whitelist
Jan 27 11:48:09.916: INFO: namespace e2e-tests-configmap-smddb deletion completed in 6.350998783s

• [SLOW TEST:18.926 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
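Note on the multiple-volume ConfigMap case above: it differs from the projected variant earlier in the run only in that each volume uses the plain configMap source directly. A brief sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
    volumeMounts:
    - name: cm-1
      mountPath: /etc/cm-1
    - name: cm-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-1
    configMap:
      name: configmap-multi-demo   # must exist before the pod starts
  - name: cm-2
    configMap:
      name: configmap-multi-demo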
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:48:09.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qgrw7
Jan 27 11:48:20.232: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qgrw7
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 11:48:20.242: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:52:21.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qgrw7" for this suite.
Jan 27 11:52:28.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:52:28.248: INFO: namespace: e2e-tests-container-probe-qgrw7, resource: bindings, ignored listing per whitelist
Jan 27 11:52:28.317: INFO: namespace e2e-tests-container-probe-qgrw7 deletion completed in 6.510077351s

• [SLOW TEST:258.401 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
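Note on this liveness-http pod: it is the counterpart of the restart case earlier in the run; here the probed endpoint keeps returning success, so restartCount stays 0 for the roughly four minutes the test watches it. A sketch of an httpGet liveness probe of that shape; the image, path, and port are illustrative (the e2e image serves its own health endpoint).

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: nginx               # anything that answers 200 on the probed path
    livenessProbe:
      httpGet:
        path: /                # a path that keeps succeeding
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 3
      failureThreshold: 1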
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:52:28.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 11:52:28.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:52:39.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-m77bs" for this suite.
Jan 27 11:53:25.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:53:25.390: INFO: namespace: e2e-tests-pods-m77bs, resource: bindings, ignored listing per whitelist
Jan 27 11:53:25.400: INFO: namespace e2e-tests-pods-m77bs deletion completed in 46.216058017s

• [SLOW TEST:57.083 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:53:25.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 27 11:53:25.587: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 27 11:53:25.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6ngw'
Jan 27 11:53:26.133: INFO: stderr: ""
Jan 27 11:53:26.133: INFO: stdout: "service/redis-slave created\n"
Jan 27 11:53:26.134: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 27 11:53:26.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6ngw'
Jan 27 11:53:26.664: INFO: stderr: ""
Jan 27 11:53:26.664: INFO: stdout: "service/redis-master created\n"
Jan 27 11:53:26.665: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 27 11:53:26.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6ngw'
Jan 27 11:53:27.161: INFO: stderr: ""
Jan 27 11:53:27.162: INFO: stdout: "service/frontend created\n"
Jan 27 11:53:27.162: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 27 11:53:27.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6ngw'
Jan 27 11:53:27.660: INFO: stderr: ""
Jan 27 11:53:27.660: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 27 11:53:27.661: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 27 11:53:27.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6ngw'
Jan 27 11:53:28.301: INFO: stderr: ""
Jan 27 11:53:28.301: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 27 11:53:28.302: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 27 11:53:28.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6ngw'
Jan 27 11:53:28.915: INFO: stderr: ""
Jan 27 11:53:28.915: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 27 11:53:28.915: INFO: Waiting for all frontend pods to be Running.
Jan 27 11:53:58.967: INFO: Waiting for frontend to serve content.
Jan 27 11:53:59.099: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
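Note: the connection-refused response above is the expected transient window: the frontend pods are Running before redis-slave has started accepting connections, and the test simply retries until the guestbook responds (the next attempt below succeeds). Outside a test fixture, a readiness probe on the redis containers would keep them out of the Service endpoints until the port is actually open. A hedged sketch of such a probe on the slave container (not part of the manifests used by this test):

      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        ports:
        - containerPort: 6379
        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 5
          periodSeconds: 5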
Jan 27 11:54:04.143: INFO: Trying to add a new entry to the guestbook. Jan 27 11:54:04.174: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jan 27 11:54:04.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6ngw' Jan 27 11:54:04.564: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 11:54:04.565: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 27 11:54:04.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6ngw' Jan 27 11:54:04.933: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 11:54:04.933: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 27 11:54:04.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6ngw' Jan 27 11:54:05.190: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 11:54:05.190: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 27 11:54:05.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6ngw' Jan 27 11:54:05.315: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 11:54:05.315: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 27 11:54:05.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6ngw' Jan 27 11:54:05.500: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 11:54:05.500: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 27 11:54:05.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s6ngw' Jan 27 11:54:06.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 27 11:54:06.082: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:54:06.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s6ngw" for this suite. 
Jan 27 11:54:58.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:54:58.593: INFO: namespace: e2e-tests-kubectl-s6ngw, resource: bindings, ignored listing per whitelist Jan 27 11:54:58.630: INFO: namespace e2e-tests-kubectl-s6ngw deletion completed in 52.427313196s • [SLOW TEST:93.229 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:54:58.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d7ac5689-40fb-11ea-b652-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 27 11:54:58.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-7x7kh" to be "success or failure" Jan 27 11:54:58.842: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.118297ms Jan 27 11:55:00.887: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109096481s Jan 27 11:55:02.910: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132380307s Jan 27 11:55:05.298: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.520792266s Jan 27 11:55:07.317: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539421723s Jan 27 11:55:09.382: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.604287857s STEP: Saw pod success Jan 27 11:55:09.382: INFO: Pod "pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:55:09.392: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 27 11:55:10.715: INFO: Waiting for pod pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005 to disappear Jan 27 11:55:10.737: INFO: Pod pod-configmaps-d7ad4538-40fb-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:55:10.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7x7kh" for this suite. Jan 27 11:55:16.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:55:17.073: INFO: namespace: e2e-tests-configmap-7x7kh, resource: bindings, ignored listing per whitelist Jan 27 11:55:17.142: INFO: namespace e2e-tests-configmap-7x7kh deletion completed in 6.396239891s • [SLOW TEST:18.512 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:55:17.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 27 11:55:17.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-rp5rf' Jan 27 11:55:17.589: INFO: stderr: "" Jan 27 11:55:17.589: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jan 27 11:55:17.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-rp5rf' Jan 27 11:55:22.822: INFO: stderr: "" Jan 27 11:55:22.822: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:55:22.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rp5rf" for this suite. Jan 27 11:55:28.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:55:28.986: INFO: namespace: e2e-tests-kubectl-rp5rf, resource: bindings, ignored listing per whitelist Jan 27 11:55:29.152: INFO: namespace e2e-tests-kubectl-rp5rf deletion completed in 6.284997373s • [SLOW TEST:12.009 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:55:29.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jan 27 11:55:29.355: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-brbk9" to be "success or failure" Jan 27 11:55:29.373: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.152344ms Jan 27 11:55:31.984: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628209363s Jan 27 11:55:34.007: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.651763243s Jan 27 11:55:36.019: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.663928055s Jan 27 11:55:38.206: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.850426909s Jan 27 11:55:40.265: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.910105917s Jan 27 11:55:42.281: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.925804183s Jan 27 11:55:44.458: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.102858172s STEP: Saw pod success Jan 27 11:55:44.458: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 27 11:55:44.470: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 27 11:55:44.989: INFO: Waiting for pod pod-host-path-test to disappear Jan 27 11:55:44.997: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:55:44.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-brbk9" for this suite. Jan 27 11:55:51.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:55:51.164: INFO: namespace: e2e-tests-hostpath-brbk9, resource: bindings, ignored listing per whitelist Jan 27 11:55:51.218: INFO: namespace e2e-tests-hostpath-brbk9 deletion completed in 6.212209554s • [SLOW TEST:22.065 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:55:51.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 27 11:55:51.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-5cr7w" to be "success or failure" Jan 27 11:55:51.519: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 103.145053ms Jan 27 11:55:53.538: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121933473s Jan 27 11:55:55.572: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156032101s Jan 27 11:55:57.715: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299202719s Jan 27 11:55:59.730: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313887667s Jan 27 11:56:01.751: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.334733138s STEP: Saw pod success Jan 27 11:56:01.751: INFO: Pod "downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:56:01.759: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005 container client-container: STEP: delete the pod Jan 27 11:56:02.527: INFO: Waiting for pod downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005 to disappear Jan 27 11:56:02.875: INFO: Pod downwardapi-volume-f709e0b8-40fb-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:56:02.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5cr7w" for this suite. Jan 27 11:56:08.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:56:09.094: INFO: namespace: e2e-tests-downward-api-5cr7w, resource: bindings, ignored listing per whitelist Jan 27 11:56:09.139: INFO: namespace e2e-tests-downward-api-5cr7w deletion completed in 6.252151987s • [SLOW TEST:17.921 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:56:09.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0127 11:56:12.808188 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
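Note: the garbage collector spec above deletes the Deployment without orphaning and then waits for the owned ReplicaSet and Pods to be garbage-collected, while a later spec in this run exercises the Orphan policy, where the ReplicaSet must survive the delete. The policy is carried in the DeleteOptions body sent with the delete call; a hedged sketch of that body (shown for illustration, not taken from the test code):

apiVersion: v1
kind: DeleteOptions
# Background lets the garbage collector remove dependents after the owner is
# deleted; Foreground blocks owner deletion until dependents are gone;
# Orphan keeps the dependents and only clears their ownerReferences.
propagationPolicy: Background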
Jan 27 11:56:12.808: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:56:12.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-87wnc" for this suite. Jan 27 11:56:19.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:56:19.172: INFO: namespace: e2e-tests-gc-87wnc, resource: bindings, ignored listing per whitelist Jan 27 11:56:19.214: INFO: namespace e2e-tests-gc-87wnc deletion completed in 6.401946125s • [SLOW TEST:10.075 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:56:19.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-ttgw STEP: Creating a pod to test atomic-volume-subpath Jan 27 11:56:19.430: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ttgw" in namespace "e2e-tests-subpath-4bdcg" to be "success or failure" Jan 27 11:56:19.522: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 91.448015ms Jan 27 11:56:21.891: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.460462948s Jan 27 11:56:23.912: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481576678s Jan 27 11:56:26.358: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.927504396s Jan 27 11:56:28.376: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.945939759s Jan 27 11:56:30.400: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.970089934s Jan 27 11:56:32.434: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.004099138s Jan 27 11:56:34.448: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.017513814s Jan 27 11:56:36.521: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 17.090613456s Jan 27 11:56:38.604: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 19.173863392s Jan 27 11:56:40.698: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 21.267615823s Jan 27 11:56:42.717: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 23.286498755s Jan 27 11:56:44.732: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 25.30192604s Jan 27 11:56:46.745: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 27.314484536s Jan 27 11:56:48.763: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 29.332791016s Jan 27 11:56:50.782: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 31.35208154s Jan 27 11:56:52.815: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Running", Reason="", readiness=false. Elapsed: 33.385304696s Jan 27 11:56:54.827: INFO: Pod "pod-subpath-test-configmap-ttgw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.39702811s STEP: Saw pod success Jan 27 11:56:54.827: INFO: Pod "pod-subpath-test-configmap-ttgw" satisfied condition "success or failure" Jan 27 11:56:54.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-ttgw container test-container-subpath-configmap-ttgw: STEP: delete the pod Jan 27 11:56:55.007: INFO: Waiting for pod pod-subpath-test-configmap-ttgw to disappear Jan 27 11:56:55.047: INFO: Pod pod-subpath-test-configmap-ttgw no longer exists STEP: Deleting pod pod-subpath-test-configmap-ttgw Jan 27 11:56:55.047: INFO: Deleting pod "pod-subpath-test-configmap-ttgw" in namespace "e2e-tests-subpath-4bdcg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:56:55.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-4bdcg" for this suite. 
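Note: the subPath mechanism this spec exercises mounts a single key of a volume at a file path inside the container rather than the whole volume. A minimal illustration, assuming a hypothetical ConfigMap named my-config with a key data-0 (this is not the framework's actual pod-subpath-test-configmap-ttgw fixture, which additionally drives the atomic-writer checks):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/demo/data-0 && sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/demo/data-0
      subPath: data-0          # mounts only this key, not the whole volume
  volumes:
  - name: config
    configMap:
      name: my-config          # hypothetical ConfigMap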
Jan 27 11:57:01.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:57:01.981: INFO: namespace: e2e-tests-subpath-4bdcg, resource: bindings, ignored listing per whitelist Jan 27 11:57:02.233: INFO: namespace e2e-tests-subpath-4bdcg deletion completed in 7.152513694s • [SLOW TEST:43.019 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:57:02.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:57:02.452: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 27 11:57:02.521: INFO: Number of nodes with available pods: 0 Jan 27 11:57:02.521: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 27 11:57:02.791: INFO: Number of nodes with available pods: 0 Jan 27 11:57:02.791: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:03.801: INFO: Number of nodes with available pods: 0 Jan 27 11:57:03.801: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:04.808: INFO: Number of nodes with available pods: 0 Jan 27 11:57:04.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:05.812: INFO: Number of nodes with available pods: 0 Jan 27 11:57:05.812: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:06.817: INFO: Number of nodes with available pods: 0 Jan 27 11:57:06.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:09.147: INFO: Number of nodes with available pods: 0 Jan 27 11:57:09.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:09.803: INFO: Number of nodes with available pods: 0 Jan 27 11:57:09.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:10.831: INFO: Number of nodes with available pods: 0 Jan 27 11:57:10.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:11.810: INFO: Number of nodes with available pods: 0 Jan 27 11:57:11.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:12.805: INFO: Number of nodes with available pods: 1 Jan 27 11:57:12.805: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 27 11:57:12.925: INFO: Number of nodes with available pods: 1 Jan 27 11:57:12.925: INFO: Number of running nodes: 0, number of available pods: 1 Jan 27 11:57:13.942: INFO: Number of nodes with available pods: 0 Jan 27 11:57:13.942: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 27 11:57:14.071: INFO: Number of nodes with available pods: 0 Jan 27 11:57:14.071: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:15.084: INFO: Number of nodes with available pods: 0 Jan 27 11:57:15.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:16.084: INFO: Number of nodes with available pods: 0 Jan 27 11:57:16.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:17.083: INFO: Number of nodes with available pods: 0 Jan 27 11:57:17.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:18.218: INFO: Number of nodes with available pods: 0 Jan 27 11:57:18.218: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:19.087: INFO: Number of nodes with available pods: 0 Jan 27 11:57:19.087: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:20.083: INFO: Number of nodes with available pods: 0 Jan 27 11:57:20.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:21.092: INFO: Number of nodes with available pods: 0 Jan 27 11:57:21.092: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:22.081: INFO: Number of nodes with available pods: 0 Jan 27 11:57:22.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:23.109: INFO: Number of 
nodes with available pods: 0 Jan 27 11:57:23.109: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:24.194: INFO: Number of nodes with available pods: 0 Jan 27 11:57:24.194: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:25.085: INFO: Number of nodes with available pods: 0 Jan 27 11:57:25.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:26.090: INFO: Number of nodes with available pods: 0 Jan 27 11:57:26.090: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:27.086: INFO: Number of nodes with available pods: 0 Jan 27 11:57:27.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:28.631: INFO: Number of nodes with available pods: 0 Jan 27 11:57:28.631: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:29.083: INFO: Number of nodes with available pods: 0 Jan 27 11:57:29.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:30.082: INFO: Number of nodes with available pods: 0 Jan 27 11:57:30.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:31.089: INFO: Number of nodes with available pods: 0 Jan 27 11:57:31.089: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 27 11:57:32.088: INFO: Number of nodes with available pods: 1 Jan 27 11:57:32.088: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tv6qh, will wait for the garbage collector to delete the pods Jan 27 11:57:32.175: INFO: Deleting DaemonSet.extensions daemon-set took: 13.938236ms Jan 27 11:57:32.375: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.374381ms Jan 27 11:57:42.818: INFO: Number of nodes with available pods: 0 Jan 27 11:57:42.818: INFO: Number of running nodes: 0, number of available pods: 0 Jan 27 11:57:42.833: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tv6qh/daemonsets","resourceVersion":"19630769"},"items":null} Jan 27 11:57:42.847: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tv6qh/pods","resourceVersion":"19630770"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:57:42.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tv6qh" for this suite. 
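Note: the complex-daemon spec above creates a DaemonSet whose pod template carries a node selector, flips the node's label from blue to green, and switches the update strategy to RollingUpdate, checking at each step whether the daemon pod is scheduled. A hedged sketch of the kind of object involved (label key and values are illustrative; the framework builds its own spec):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: green           # illustrative label; the test relabels the node blue -> green
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1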
Jan 27 11:57:50.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:57:51.069: INFO: namespace: e2e-tests-daemonsets-tv6qh, resource: bindings, ignored listing per whitelist Jan 27 11:57:51.163: INFO: namespace e2e-tests-daemonsets-tv6qh deletion completed in 8.217865697s • [SLOW TEST:48.929 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:57:51.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3e89e2b7-40fc-11ea-b652-0242ac110005 STEP: Creating a pod to test consume secrets Jan 27 11:57:51.353: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-mc8bb" to be "success or failure" Jan 27 11:57:51.366: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.681971ms Jan 27 11:57:53.417: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063269919s Jan 27 11:57:55.445: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091315757s Jan 27 11:57:57.519: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165129635s Jan 27 11:57:59.535: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181556394s Jan 27 11:58:01.562: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.209060962s STEP: Saw pod success Jan 27 11:58:01.563: INFO: Pod "pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005" satisfied condition "success or failure" Jan 27 11:58:01.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 27 11:58:01.648: INFO: Waiting for pod pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005 to disappear Jan 27 11:58:01.763: INFO: Pod pod-projected-secrets-3e8ae986-40fc-11ea-b652-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:58:01.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mc8bb" for this suite. Jan 27 11:58:08.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:58:08.244: INFO: namespace: e2e-tests-projected-mc8bb, resource: bindings, ignored listing per whitelist Jan 27 11:58:08.300: INFO: namespace e2e-tests-projected-mc8bb deletion completed in 6.522485857s • [SLOW TEST:17.137 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:58:08.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0127 11:58:39.388770 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 27 11:58:39.388: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 27 11:58:39.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2j9gb" for this suite. Jan 27 11:58:50.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 27 11:58:51.107: INFO: namespace: e2e-tests-gc-2j9gb, resource: bindings, ignored listing per whitelist Jan 27 11:58:52.263: INFO: namespace e2e-tests-gc-2j9gb deletion completed in 12.870827026s • [SLOW TEST:43.963 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 27 11:58:52.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 27 11:58:52.790: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 19.160275ms)
Jan 27 11:58:52.849: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 58.167562ms)
Jan 27 11:58:52.861: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.409914ms)
Jan 27 11:58:52.867: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.189881ms)
Jan 27 11:58:52.871: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.869004ms)
Jan 27 11:58:52.876: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.353252ms)
Jan 27 11:58:52.882: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.063925ms)
Jan 27 11:58:52.886: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.491898ms)
Jan 27 11:58:52.891: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.824833ms)
Jan 27 11:58:52.895: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.256253ms)
Jan 27 11:58:52.899: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.117188ms)
Jan 27 11:58:52.904: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.199058ms)
Jan 27 11:58:52.908: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.040288ms)
Jan 27 11:58:52.912: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.096124ms)
Jan 27 11:58:52.916: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.808632ms)
Jan 27 11:58:52.920: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.892812ms)
Jan 27 11:58:52.923: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.757735ms)
Jan 27 11:58:52.927: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.09499ms)
Jan 27 11:58:52.932: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.453058ms)
Jan 27 11:58:52.936: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.341534ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:58:52.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-ggx67" for this suite.
Jan 27 11:58:58.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:58:59.070: INFO: namespace: e2e-tests-proxy-ggx67, resource: bindings, ignored listing per whitelist
Jan 27 11:58:59.117: INFO: namespace e2e-tests-proxy-ggx67 deletion completed in 6.17773702s

• [SLOW TEST:6.854 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:58:59.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 27 11:58:59.295: INFO: namespace e2e-tests-kubectl-wgv4n
Jan 27 11:58:59.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgv4n'
Jan 27 11:59:02.106: INFO: stderr: ""
Jan 27 11:59:02.106: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 27 11:59:03.798: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:03.798: INFO: Found 0 / 1
Jan 27 11:59:04.133: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:04.133: INFO: Found 0 / 1
Jan 27 11:59:05.135: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:05.135: INFO: Found 0 / 1
Jan 27 11:59:06.124: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:06.124: INFO: Found 0 / 1
Jan 27 11:59:07.609: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:07.609: INFO: Found 0 / 1
Jan 27 11:59:08.120: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:08.120: INFO: Found 0 / 1
Jan 27 11:59:09.286: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:09.287: INFO: Found 0 / 1
Jan 27 11:59:10.120: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:10.120: INFO: Found 0 / 1
Jan 27 11:59:11.130: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:11.130: INFO: Found 0 / 1
Jan 27 11:59:12.136: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:12.136: INFO: Found 1 / 1
Jan 27 11:59:12.136: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 27 11:59:12.145: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 11:59:12.145: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 27 11:59:12.145: INFO: wait on redis-master startup in e2e-tests-kubectl-wgv4n 
Jan 27 11:59:12.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4wqpb redis-master --namespace=e2e-tests-kubectl-wgv4n'
Jan 27 11:59:12.357: INFO: stderr: ""
Jan 27 11:59:12.357: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jan 11:59:10.098 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jan 11:59:10.098 # Server started, Redis version 3.2.12\n1:M 27 Jan 11:59:10.099 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jan 11:59:10.099 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 27 11:59:12.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-wgv4n'
Jan 27 11:59:12.652: INFO: stderr: ""
Jan 27 11:59:12.652: INFO: stdout: "service/rm2 exposed\n"
Jan 27 11:59:12.659: INFO: Service rm2 in namespace e2e-tests-kubectl-wgv4n found.
STEP: exposing service
Jan 27 11:59:14.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-wgv4n'
Jan 27 11:59:14.956: INFO: stderr: ""
Jan 27 11:59:14.956: INFO: stdout: "service/rm3 exposed\n"
Jan 27 11:59:15.099: INFO: Service rm3 in namespace e2e-tests-kubectl-wgv4n found.
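Note: kubectl expose builds a Service from the resource it is pointed at, copying the selector from the RC (or, for rm3, from the source Service rm2) and applying the given port/targetPort. Roughly, the object created for rm2 above looks like the sketch below (the selector is shown as the app: redis label the test matches pods on; the exact labels come from the redis-master RC):

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  ports:
  - port: 1234
    targetPort: 6379
  selector:
    app: redis               # assumed; expose copies the selector from the exposed RC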
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 11:59:17.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wgv4n" for this suite.
Jan 27 11:59:43.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 11:59:43.321: INFO: namespace: e2e-tests-kubectl-wgv4n, resource: bindings, ignored listing per whitelist
Jan 27 11:59:43.364: INFO: namespace e2e-tests-kubectl-wgv4n deletion completed in 26.221785261s

• [SLOW TEST:44.247 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 11:59:43.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 27 11:59:43.560: INFO: PodSpec: initContainers in spec.initContainers
Jan 27 12:00:50.255: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-816dadaf-40fc-11ea-b652-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-nxzms", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-nxzms/pods/pod-init-816dadaf-40fc-11ea-b652-0242ac110005", UID:"816ebb98-40fc-11ea-a994-fa163e34d433", ResourceVersion:"19631174", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715723183, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"560652673"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mxzs9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000b737c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mxzs9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mxzs9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mxzs9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00187f188), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001453c20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00187f200)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00187f220)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00187f228), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00187f22c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715723183, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715723183, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715723183, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715723183, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0011772a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004268c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000426930)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://809029d04e8327dcb8b2c63cd9fc9eda7f6f7b7aa3f5e096fa96f2b7ae3265b6"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011772e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011772c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:00:50.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-nxzms" for this suite.
Jan 27 12:01:12.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:01:12.559: INFO: namespace: e2e-tests-init-container-nxzms, resource: bindings, ignored listing per whitelist
Jan 27 12:01:12.675: INFO: namespace e2e-tests-init-container-nxzms deletion completed in 22.366903431s

• [SLOW TEST:89.311 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
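To reproduce the behaviour this spec verifies outside the suite, a pod with the same shape as the dump above can be applied by hand: a RestartAlways pod whose first init container always exits non-zero, so the app container must never start. This is a minimal sketch; the pod name is illustrative, while the images and commands match the ones in the dump.

```sh
# Sketch: init1 fails forever, so init2 never runs and run1 never starts.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo          # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
# Expect init1 to enter CrashLoopBackOff while run1 stays stuck in PodInitializing.
kubectl get pod pod-init-demo --watch
```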
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:01:12.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 12:01:12.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 27 12:01:13.096: INFO: stderr: ""
Jan 27 12:01:13.096: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:01:13.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-87t5f" for this suite.
Jan 27 12:01:19.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:01:19.261: INFO: namespace: e2e-tests-kubectl-87t5f, resource: bindings, ignored listing per whitelist
Jan 27 12:01:19.369: INFO: namespace e2e-tests-kubectl-87t5f deletion completed in 6.24300542s

• [SLOW TEST:6.693 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
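The spec only checks that kubectl version prints both the client and the server block. A quick hand check along the same lines (the kubeconfig path is whatever your environment uses):

```sh
# On these 1.13.x builds each block is printed as a version.Info line.
kubectl --kubeconfig="$HOME/.kube/config" version
kubectl version 2>/dev/null | grep -c 'version.Info'   # expect 2: Client and Server
```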
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:01:19.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 12:01:19.761: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"baac66d4-40fc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00201f752), BlockOwnerDeletion:(*bool)(0xc00201f753)}}
Jan 27 12:01:19.842: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ba9b2f69-40fc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00201f952), BlockOwnerDeletion:(*bool)(0xc00201f953)}}
Jan 27 12:01:19.942: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"baaaf945-40fc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00231b99a), BlockOwnerDeletion:(*bool)(0xc00231b99b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:01:24.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bwsbx" for this suite.
Jan 27 12:01:31.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:01:31.141: INFO: namespace: e2e-tests-gc-bwsbx, resource: bindings, ignored listing per whitelist
Jan 27 12:01:31.167: INFO: namespace e2e-tests-gc-bwsbx deletion completed in 6.175741017s

• [SLOW TEST:11.798 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
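The three OwnerReferences lines above encode a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the point of the spec is that deletion still makes progress. A rough manual analogue, assuming plain pods and looking the UIDs up at runtime; the pod names and image are illustrative, not the test's generated objects:

```sh
# Build three pods, wire their ownerReferences into a cycle, then delete them;
# the garbage collector should not deadlock on the circular references.
for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=k8s.gcr.io/pause:3.1 --restart=Never
done
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
own() {  # make pod $1 owned by pod $2
  kubectl patch pod "$1" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$(uid "$2")\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
}
own pod1 pod3
own pod2 pod1
own pod3 pod2
kubectl delete pod pod1 pod2 pod3
```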
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:01:31.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-c1b48f52-40fc-11ea-b652-0242ac110005
STEP: Creating secret with name s-test-opt-upd-c1b49054-40fc-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c1b48f52-40fc-11ea-b652-0242ac110005
STEP: Updating secret s-test-opt-upd-c1b49054-40fc-11ea-b652-0242ac110005
STEP: Creating secret with name s-test-opt-create-c1b49105-40fc-11ea-b652-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:01:49.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-l6g48" for this suite.
Jan 27 12:02:13.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:02:14.016: INFO: namespace: e2e-tests-secrets-l6g48, resource: bindings, ignored listing per whitelist
Jan 27 12:02:14.125: INFO: namespace e2e-tests-secrets-l6g48 deletion completed in 24.222929042s

• [SLOW TEST:42.958 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
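The steps above boil down to mounting Secrets with optional: true and watching the kubelet re-project them after a delete, an update, and a late create. A hand-runnable sketch (secret and pod names are illustrative; the real spec drives three secrets and polls the projected files):

```sh
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: opt-secret-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/opt-secret/data-1 2>/dev/null || echo '(absent)'; sleep 5; done"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/opt-secret
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-upd
      optional: true
EOF
# Replace the secret and watch the projected file change after the kubelet sync period.
kubectl delete secret s-test-opt-upd
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-2
kubectl logs -f opt-secret-demo
```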
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:02:14.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 12:02:14.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-bz486'
Jan 27 12:02:14.469: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 12:02:14.470: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 27 12:02:16.525: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-s8n5g]
Jan 27 12:02:16.525: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-s8n5g" in namespace "e2e-tests-kubectl-bz486" to be "running and ready"
Jan 27 12:02:16.537: INFO: Pod "e2e-test-nginx-rc-s8n5g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.239862ms
Jan 27 12:02:18.571: INFO: Pod "e2e-test-nginx-rc-s8n5g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046342615s
Jan 27 12:02:20.617: INFO: Pod "e2e-test-nginx-rc-s8n5g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09260735s
Jan 27 12:02:22.648: INFO: Pod "e2e-test-nginx-rc-s8n5g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123049387s
Jan 27 12:02:24.658: INFO: Pod "e2e-test-nginx-rc-s8n5g": Phase="Running", Reason="", readiness=true. Elapsed: 8.132789488s
Jan 27 12:02:24.658: INFO: Pod "e2e-test-nginx-rc-s8n5g" satisfied condition "running and ready"
Jan 27 12:02:24.658: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-s8n5g]
Jan 27 12:02:24.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bz486'
Jan 27 12:02:24.866: INFO: stderr: ""
Jan 27 12:02:24.867: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 27 12:02:24.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bz486'
Jan 27 12:02:25.044: INFO: stderr: ""
Jan 27 12:02:25.044: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:02:25.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bz486" for this suite.
Jan 27 12:02:49.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:02:49.413: INFO: namespace: e2e-tests-kubectl-bz486, resource: bindings, ignored listing per whitelist
Jan 27 12:02:49.453: INFO: namespace e2e-tests-kubectl-bz486 deletion completed in 24.372883777s

• [SLOW TEST:35.328 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
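The run/v1 generator used above is what makes kubectl run create a ReplicationController; it was already deprecated in this release and the --generator flag has since been removed from newer kubectl. Against a cluster of this vintage, the same flow by hand looks roughly like:

```sh
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc                 # the RC itself
kubectl get pods -l run=e2e-test-nginx-rc        # the pod it controls (run=<name> label)
kubectl logs rc/e2e-test-nginx-rc                # may be empty, as in the run above
kubectl delete rc e2e-test-nginx-rc
```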
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:02:49.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 27 12:02:49.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-5pjh8 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 27 12:03:01.595: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0127 12:02:59.813032    2640 log.go:172] (0xc00013ae70) (0xc0007f1900) Create stream\nI0127 12:02:59.813509    2640 log.go:172] (0xc00013ae70) (0xc0007f1900) Stream added, broadcasting: 1\nI0127 12:02:59.830972    2640 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0127 12:02:59.831019    2640 log.go:172] (0xc00013ae70) (0xc000538fa0) Create stream\nI0127 12:02:59.831032    2640 log.go:172] (0xc00013ae70) (0xc000538fa0) Stream added, broadcasting: 3\nI0127 12:02:59.832229    2640 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0127 12:02:59.832253    2640 log.go:172] (0xc00013ae70) (0xc000539720) Create stream\nI0127 12:02:59.832258    2640 log.go:172] (0xc00013ae70) (0xc000539720) Stream added, broadcasting: 5\nI0127 12:02:59.833519    2640 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0127 12:02:59.833545    2640 log.go:172] (0xc00013ae70) (0xc000539c20) Create stream\nI0127 12:02:59.833554    2640 log.go:172] (0xc00013ae70) (0xc000539c20) Stream added, broadcasting: 7\nI0127 12:02:59.836226    2640 log.go:172] (0xc00013ae70) Reply frame received for 7\nI0127 12:02:59.836849    2640 log.go:172] (0xc000538fa0) (3) Writing data frame\nI0127 12:02:59.837609    2640 log.go:172] (0xc000538fa0) (3) Writing data frame\nI0127 12:02:59.847930    2640 log.go:172] (0xc00013ae70) Data frame received for 5\nI0127 12:02:59.847980    2640 log.go:172] (0xc000539720) (5) Data frame handling\nI0127 12:02:59.847998    2640 log.go:172] (0xc000539720) (5) Data frame sent\nI0127 12:02:59.855911    2640 log.go:172] (0xc00013ae70) Data frame received for 5\nI0127 12:02:59.855938    2640 log.go:172] (0xc000539720) (5) Data frame handling\nI0127 12:02:59.855955    2640 log.go:172] (0xc000539720) (5) Data frame sent\nI0127 12:03:01.519774    2640 log.go:172] (0xc00013ae70) Data frame received for 1\nI0127 12:03:01.520037    2640 log.go:172] (0xc00013ae70) (0xc000538fa0) Stream removed, broadcasting: 3\nI0127 12:03:01.520194    2640 log.go:172] (0xc00013ae70) (0xc000539720) Stream removed, broadcasting: 5\nI0127 12:03:01.520256    2640 log.go:172] (0xc0007f1900) (1) Data frame handling\nI0127 12:03:01.520347    2640 log.go:172] (0xc00013ae70) (0xc000539c20) Stream removed, broadcasting: 7\nI0127 12:03:01.520447    2640 log.go:172] (0xc0007f1900) (1) Data frame sent\nI0127 12:03:01.520501    2640 log.go:172] (0xc00013ae70) (0xc0007f1900) Stream removed, broadcasting: 1\nI0127 12:03:01.520562    2640 log.go:172] (0xc00013ae70) Go away received\nI0127 12:03:01.520707    2640 log.go:172] (0xc00013ae70) (0xc0007f1900) Stream removed, broadcasting: 1\nI0127 12:03:01.520727    2640 log.go:172] (0xc00013ae70) (0xc000538fa0) Stream removed, broadcasting: 3\nI0127 12:03:01.520732    2640 log.go:172] (0xc00013ae70) (0xc000539720) Stream removed, broadcasting: 5\nI0127 12:03:01.520737    2640 log.go:172] (0xc00013ae70) (0xc000539c20) Stream removed, broadcasting: 7\n"
Jan 27 12:03:01.596: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:03:04.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5pjh8" for this suite.
Jan 27 12:03:10.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:03:10.647: INFO: namespace: e2e-tests-kubectl-5pjh8, resource: bindings, ignored listing per whitelist
Jan 27 12:03:10.762: INFO: namespace e2e-tests-kubectl-5pjh8 deletion completed in 6.615041887s

• [SLOW TEST:21.309 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
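The --rm job above is driven through an attached stdin stream; the large stderr block is just the client logging its SPDY stream setup and teardown. Re-run by hand it is roughly the following (deprecated job/v1 generator, gone from newer kubectl):

```sh
# printf avoids a trailing newline so the output matches "abcd1234stdin closed".
printf 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo stdin closed'
# The Job object is deleted automatically once the attached command finishes.
```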
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:03:10.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-959zr
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-959zr
STEP: Deleting pre-stop pod
Jan 27 12:03:36.311: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:03:36.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-959zr" for this suite.
Jan 27 12:04:16.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:04:16.778: INFO: namespace: e2e-tests-prestop-959zr, resource: bindings, ignored listing per whitelist
Jan 27 12:04:16.863: INFO: namespace e2e-tests-prestop-959zr deletion completed in 40.383079932s

• [SLOW TEST:66.100 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
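The JSON blob above comes from the server pod: it records exactly one prestop call once the tester pod is deleted (the default/nettest lines in Log are unrelated noise from the same server). A minimal preStop sketch in the same spirit; the handler URL below is an illustrative stand-in, not the test's server pod:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://prestop-server:8080/prestop"]   # illustrative endpoint
EOF
# Deleting the pod should run the hook before the container receives SIGTERM.
kubectl delete pod prestop-demo
```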
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:04:16.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-2470d599-40fd-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 12:04:17.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-5z9rr" to be "success or failure"
Jan 27 12:04:17.077: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.794615ms
Jan 27 12:04:19.093: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026531003s
Jan 27 12:04:21.107: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040825491s
Jan 27 12:04:23.172: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105698695s
Jan 27 12:04:25.331: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264689026s
Jan 27 12:04:27.392: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.325967969s
STEP: Saw pod success
Jan 27 12:04:27.392: INFO: Pod "pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:04:27.400: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 27 12:04:27.500: INFO: Waiting for pod pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:04:27.552: INFO: Pod pod-configmaps-2471de11-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:04:27.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5z9rr" for this suite.
Jan 27 12:04:35.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:04:35.701: INFO: namespace: e2e-tests-configmap-5z9rr, resource: bindings, ignored listing per whitelist
Jan 27 12:04:35.743: INFO: namespace e2e-tests-configmap-5z9rr deletion completed in 8.180558331s

• [SLOW TEST:18.880 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
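Reduced to a hand test, the spec mounts one ConfigMap key at a mapped path and reads it back under a non-root UID. Names, the key/path mapping and the UID below are illustrative:

```sh
kubectl create configmap cm-mappings-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-mappings-demo
      items:
      - key: data-1
        path: path/to/data-1
EOF
kubectl logs cm-nonroot-demo      # expect "value-1" once the pod reports Succeeded
```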
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:04:35.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-djtk
STEP: Creating a pod to test atomic-volume-subpath
Jan 27 12:04:36.002: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-djtk" in namespace "e2e-tests-subpath-hmcjs" to be "success or failure"
Jan 27 12:04:36.024: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 21.947615ms
Jan 27 12:04:38.321: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319093591s
Jan 27 12:04:40.332: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329675115s
Jan 27 12:04:42.661: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659227257s
Jan 27 12:04:44.680: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.677742395s
Jan 27 12:04:46.688: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685361743s
Jan 27 12:04:48.787: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.784951991s
Jan 27 12:04:51.084: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.081977358s
Jan 27 12:04:53.099: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=true. Elapsed: 17.097263605s
Jan 27 12:04:55.118: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 19.116144317s
Jan 27 12:04:57.141: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 21.138541909s
Jan 27 12:04:59.153: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 23.150885026s
Jan 27 12:05:01.180: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 25.178080256s
Jan 27 12:05:03.202: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 27.199656242s
Jan 27 12:05:05.223: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 29.220803997s
Jan 27 12:05:07.239: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 31.236953816s
Jan 27 12:05:09.253: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 33.250531356s
Jan 27 12:05:11.268: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Running", Reason="", readiness=false. Elapsed: 35.266032779s
Jan 27 12:05:13.307: INFO: Pod "pod-subpath-test-downwardapi-djtk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.304497633s
STEP: Saw pod success
Jan 27 12:05:13.307: INFO: Pod "pod-subpath-test-downwardapi-djtk" satisfied condition "success or failure"
Jan 27 12:05:13.313: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-djtk container test-container-subpath-downwardapi-djtk: 
STEP: delete the pod
Jan 27 12:05:13.910: INFO: Waiting for pod pod-subpath-test-downwardapi-djtk to disappear
Jan 27 12:05:13.924: INFO: Pod pod-subpath-test-downwardapi-djtk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-djtk
Jan 27 12:05:13.924: INFO: Deleting pod "pod-subpath-test-downwardapi-djtk" in namespace "e2e-tests-subpath-hmcjs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:05:13.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hmcjs" for this suite.
Jan 27 12:05:22.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:05:22.384: INFO: namespace: e2e-tests-subpath-hmcjs, resource: bindings, ignored listing per whitelist
Jan 27 12:05:22.599: INFO: namespace e2e-tests-subpath-hmcjs deletion completed in 8.622753129s

• [SLOW TEST:46.856 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
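Most of the long Running phase above is the test container repeatedly reading the same subPath file while the atomic writer re-projects the downwardAPI volume underneath it. A much smaller read-once sketch of the same mount shape (paths and names are illustrative):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/subpath-file"]
    volumeMounts:
    - name: downward
      mountPath: /subpath-file
      subPath: podname           # mounts just this one projected file
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downward-demo   # should print the pod's own name
```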
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:05:22.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-4b9cf0dc-40fd-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 12:05:22.966: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-l2jv7" to be "success or failure"
Jan 27 12:05:22.990: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.162233ms
Jan 27 12:05:25.296: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330163796s
Jan 27 12:05:27.347: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381196132s
Jan 27 12:05:29.504: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537786355s
Jan 27 12:05:31.545: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.57844747s
Jan 27 12:05:33.556: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.589619491s
STEP: Saw pod success
Jan 27 12:05:33.556: INFO: Pod "pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:05:33.559: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 27 12:05:34.673: INFO: Waiting for pod pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:05:34.696: INFO: Pod pod-projected-secrets-4b9f4a98-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:05:34.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l2jv7" for this suite.
Jan 27 12:05:40.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:05:40.963: INFO: namespace: e2e-tests-projected-l2jv7, resource: bindings, ignored listing per whitelist
Jan 27 12:05:41.056: INFO: namespace e2e-tests-projected-l2jv7 deletion completed in 6.215620104s

• [SLOW TEST:18.456 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
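Same pattern as the ConfigMap case, but through the projected volume source: one Secret key mapped to a new path. Secret, pod and path names below are illustrative:

```sh
kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
EOF
kubectl logs pod-projected-secrets-demo   # expect "value-1"
```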
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:05:41.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 27 12:06:01.490: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:01.526: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 12:06:03.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:03.544: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 12:06:05.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:05.555: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 12:06:07.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:07.550: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 12:06:09.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:09.543: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 12:06:11.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:11.555: INFO: Pod pod-with-prestop-http-hook still exists
Jan 27 12:06:13.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 27 12:06:13.545: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:06:13.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tsxxj" for this suite.
Jan 27 12:06:37.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:06:37.648: INFO: namespace: e2e-tests-container-lifecycle-hook-tsxxj, resource: bindings, ignored listing per whitelist
Jan 27 12:06:37.762: INFO: namespace e2e-tests-container-lifecycle-hook-tsxxj deletion completed in 24.182640177s

• [SLOW TEST:56.707 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
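Unlike the exec PreStop case earlier, this spec wires the hook as an httpGet against a separate handler pod and then polls until the hooked pod is gone. A sketch of the hooked pod only; the host below is an illustrative handler IP, not one from this run:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          host: 10.0.0.10        # illustrative handler-pod IP
          port: 8080
          path: /prestop
EOF
# Deleting the pod makes the kubelet issue the GET before stopping the container;
# the pod then lingers through its grace period, which is the "still exists" polling above.
kubectl delete pod pod-with-prestop-http-hook
```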
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:06:37.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-vpvd2 in namespace e2e-tests-proxy-jgm8v
I0127 12:06:37.992513       8 runners.go:184] Created replication controller with name: proxy-service-vpvd2, namespace: e2e-tests-proxy-jgm8v, replica count: 1
I0127 12:06:39.043153       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:40.043660       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:41.044208       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:42.044563       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:43.044851       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:44.045132       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:45.045603       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:46.046230       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:06:47.046588       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0127 12:06:48.046919       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0127 12:06:49.047257       8 runners.go:184] proxy-service-vpvd2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 27 12:06:49.643: INFO: setup took 11.707882813s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 27 12:06:49.678: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jgm8v/pods/proxy-service-vpvd2-4vl9x:160/proxy/: foo (200; 33.782517ms)
Jan 27 12:06:49.678: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jgm8v/pods/http:proxy-service-vpvd2-4vl9x:160/proxy/: foo (200; 34.037255ms)
Jan 27 12:06:49.678: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-jgm8v/pods/proxy-service-vpvd2-4vl9x/proxy/: 
[source log truncated here: the remaining proxy attempts and the [sig-network] Proxy version v1 teardown and summary are not recoverable]
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-88c7e152-40fd-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 12:07:05.414: INFO: Waiting up to 5m0s for pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-5zfkr" to be "success or failure"
Jan 27 12:07:05.443: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.803471ms
Jan 27 12:07:07.464: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049765059s
Jan 27 12:07:09.493: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078184986s
Jan 27 12:07:11.944: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.529824095s
Jan 27 12:07:13.972: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557534239s
Jan 27 12:07:15.995: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.581103172s
STEP: Saw pod success
Jan 27 12:07:15.996: INFO: Pod "pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:07:16.005: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 27 12:07:16.617: INFO: Waiting for pod pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:07:16.667: INFO: Pod pod-secrets-88c9aa57-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:07:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5zfkr" for this suite.
Jan 27 12:07:23.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:07:23.077: INFO: namespace: e2e-tests-secrets-5zfkr, resource: bindings, ignored listing per whitelist
Jan 27 12:07:23.265: INFO: namespace e2e-tests-secrets-5zfkr deletion completed in 6.592848913s

• [SLOW TEST:18.080 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
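For the env-var flavour, the Secret is injected through secretKeyRef rather than a volume. A minimal sketch with illustrative names:

```sh
kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs pod-secrets-env-demo   # expect SECRET_DATA=value-1
```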
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:07:23.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 12:07:23.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-kj58d" to be "success or failure"
Jan 27 12:07:23.487: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.870511ms
Jan 27 12:07:25.553: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086780988s
Jan 27 12:07:27.570: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104084888s
Jan 27 12:07:29.595: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128148163s
Jan 27 12:07:31.790: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323991257s
Jan 27 12:07:33.834: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.368043122s
STEP: Saw pod success
Jan 27 12:07:33.835: INFO: Pod "downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:07:33.856: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 12:07:34.077: INFO: Waiting for pod downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:07:34.095: INFO: Pod downwardapi-volume-9383aabb-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:07:34.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kj58d" for this suite.
Jan 27 12:07:40.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:07:41.025: INFO: namespace: e2e-tests-downward-api-kj58d, resource: bindings, ignored listing per whitelist
Jan 27 12:07:41.115: INFO: namespace e2e-tests-downward-api-kj58d deletion completed in 7.012746069s

• [SLOW TEST:17.850 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
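DefaultMode applies to every file projected by the downwardAPI volume. A hand check with an illustrative 0400 mode and illustrative paths (the spec's generated pod uses its own names and mode):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]   # -L follows the ..data symlink
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-defaultmode-demo   # expect mode -r-------- on the projected file
```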
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:07:41.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 27 12:07:41.352: INFO: Waiting up to 5m0s for pod "pod-9e340715-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-k5qqx" to be "success or failure"
Jan 27 12:07:41.368: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.096808ms
Jan 27 12:07:43.931: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578539269s
Jan 27 12:07:45.954: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.60131589s
Jan 27 12:07:48.592: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.239451706s
Jan 27 12:07:50.663: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.310192484s
Jan 27 12:07:52.680: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.327592817s
STEP: Saw pod success
Jan 27 12:07:52.680: INFO: Pod "pod-9e340715-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:07:52.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9e340715-40fd-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 12:07:52.783: INFO: Waiting for pod pod-9e340715-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:07:52.801: INFO: Pod pod-9e340715-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:07:52.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k5qqx" for this suite.
Jan 27 12:07:59.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:07:59.070: INFO: namespace: e2e-tests-emptydir-k5qqx, resource: bindings, ignored listing per whitelist
Jan 27 12:07:59.334: INFO: namespace e2e-tests-emptydir-k5qqx deletion completed in 6.466272577s

• [SLOW TEST:18.219 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
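
Note: the emptyDir case above runs as a non-root user, writes a 0644 file onto a tmpfs-backed volume and verifies the result. A minimal sketch of such a pod, with illustrative name, UID, image and command:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    nonRootUID := int64(1001) // illustrative non-root UID
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo hello > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
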
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:07:59.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-a908b95a-40fd-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 12:07:59.568: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-pb7bc" to be "success or failure"
Jan 27 12:07:59.591: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.745275ms
Jan 27 12:08:01.609: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04136485s
Jan 27 12:08:03.628: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060309782s
Jan 27 12:08:05.663: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095043515s
Jan 27 12:08:07.672: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104093668s
Jan 27 12:08:09.688: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120324153s
STEP: Saw pod success
Jan 27 12:08:09.688: INFO: Pod "pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:08:09.694: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 27 12:08:10.504: INFO: Waiting for pod pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:08:10.626: INFO: Pod pod-projected-secrets-a9106505-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:08:10.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pb7bc" for this suite.
Jan 27 12:08:16.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:08:16.872: INFO: namespace: e2e-tests-projected-pb7bc, resource: bindings, ignored listing per whitelist
Jan 27 12:08:16.957: INFO: namespace e2e-tests-projected-pb7bc deletion completed in 6.286085157s

• [SLOW TEST:17.623 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
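
Note: here one secret is projected through two separate volumes mounted at different paths in the same pod. A hedged sketch of that shape (secret name, key, paths and image are made up for illustration):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Build two projected volumes that both reference the same Secret object.
    secretVolume := func(volName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"},
                            Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
                        },
                    }},
                },
            },
        }
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/projected-secret-1/data-1 /etc/projected-secret-2/data-1"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "projected-secret-volume-1", MountPath: "/etc/projected-secret-1", ReadOnly: true},
                    {Name: "projected-secret-volume-2", MountPath: "/etc/projected-secret-2", ReadOnly: true},
                },
            }},
            Volumes: []corev1.Volume{
                secretVolume("projected-secret-volume-1"),
                secretVolume("projected-secret-volume-2"),
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
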
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:08:16.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 12:08:17.302: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-v7f8v" to be "success or failure"
Jan 27 12:08:17.405: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.746011ms
Jan 27 12:08:19.515: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212627058s
Jan 27 12:08:21.535: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2322054s
Jan 27 12:08:24.005: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702734214s
Jan 27 12:08:26.018: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.715711302s
Jan 27 12:08:28.030: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.72750361s
STEP: Saw pod success
Jan 27 12:08:28.030: INFO: Pod "downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:08:28.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 12:08:29.508: INFO: Waiting for pod downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:08:29.941: INFO: Pod downwardapi-volume-b39feef5-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:08:29.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v7f8v" for this suite.
Jan 27 12:08:36.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:08:36.124: INFO: namespace: e2e-tests-projected-v7f8v, resource: bindings, ignored listing per whitelist
Jan 27 12:08:36.184: INFO: namespace e2e-tests-projected-v7f8v deletion completed in 6.222448575s

• [SLOW TEST:19.226 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:08:36.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 27 12:08:36.385: INFO: Waiting up to 5m0s for pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-n48l5" to be "success or failure"
Jan 27 12:08:36.418: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.088164ms
Jan 27 12:08:38.492: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106761398s
Jan 27 12:08:40.519: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134116274s
Jan 27 12:08:42.554: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168819192s
Jan 27 12:08:44.698: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312827824s
Jan 27 12:08:46.732: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.346900336s
STEP: Saw pod success
Jan 27 12:08:46.732: INFO: Pod "downward-api-befdab8c-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:08:46.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-befdab8c-40fd-11ea-b652-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 27 12:08:46.968: INFO: Waiting for pod downward-api-befdab8c-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:08:46.987: INFO: Pod downward-api-befdab8c-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:08:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n48l5" for this suite.
Jan 27 12:08:53.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:08:53.195: INFO: namespace: e2e-tests-downward-api-n48l5, resource: bindings, ignored listing per whitelist
Jan 27 12:08:53.316: INFO: namespace e2e-tests-downward-api-n48l5 deletion completed in 6.285425403s

• [SLOW TEST:17.132 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
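
Note: the pod UID only exists once the API server has created the object, so the test surfaces it through a downward API fieldRef rather than a literal value. An illustrative spec (names, image and command are not the suite's):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep POD_UID"},
                Env: []corev1.EnvVar{{
                    Name: "POD_UID",
                    ValueFrom: &corev1.EnvVarSource{
                        // The UID is assigned server-side, so it can only be consumed via the downward API.
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                    },
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
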
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:08:53.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 12:09:25.691: INFO: Container started at 2020-01-27 12:09:02 +0000 UTC, pod became ready at 2020-01-27 12:09:24 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:09:25.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-p5gw2" for this suite.
Jan 27 12:09:43.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:09:43.957: INFO: namespace: e2e-tests-container-probe-p5gw2, resource: bindings, ignored listing per whitelist
Jan 27 12:09:44.069: INFO: namespace e2e-tests-container-probe-p5gw2 deletion completed in 18.35605676s

• [SLOW TEST:50.752 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
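
Note: the readiness case above asserts that the pod does not become Ready before the probe's initial delay and is never restarted. A sketch of a container with such a readiness probe; the embedded field is ProbeHandler in the current k8s.io/api, while the v1.13-era API used by this suite called it Handler (image and timings are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "test-webserver",
                Image: "docker.io/library/nginx:1.14-alpine", // illustrative web server
                ReadinessProbe: &corev1.Probe{
                    // ProbeHandler in current k8s.io/api; older API versions named this field Handler.
                    ProbeHandler: corev1.ProbeHandler{
                        HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
                    },
                    InitialDelaySeconds: 20, // the pod must not report Ready before this delay
                    PeriodSeconds:       5,
                    FailureThreshold:    3,
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
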
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:09:44.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 27 12:09:44.303: INFO: Waiting up to 5m0s for pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-vzjjr" to be "success or failure"
Jan 27 12:09:44.307: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.992881ms
Jan 27 12:09:46.324: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021001759s
Jan 27 12:09:48.338: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034953292s
Jan 27 12:09:50.718: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414932276s
Jan 27 12:09:52.856: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553287864s
Jan 27 12:09:54.916: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.613123443s
Jan 27 12:09:56.934: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.630864454s
STEP: Saw pod success
Jan 27 12:09:56.934: INFO: Pod "downward-api-e77cbf90-40fd-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:09:56.940: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e77cbf90-40fd-11ea-b652-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 27 12:09:57.076: INFO: Waiting for pod downward-api-e77cbf90-40fd-11ea-b652-0242ac110005 to disappear
Jan 27 12:09:57.090: INFO: Pod downward-api-e77cbf90-40fd-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:09:57.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vzjjr" for this suite.
Jan 27 12:10:03.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:10:03.389: INFO: namespace: e2e-tests-downward-api-vzjjr, resource: bindings, ignored listing per whitelist
Jan 27 12:10:03.417: INFO: namespace e2e-tests-downward-api-vzjjr deletion completed in 6.318313842s

• [SLOW TEST:19.348 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
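
Note: limits and requests reach the container environment through resourceFieldRef selectors against the container's own resources stanza. A sketch with illustrative quantities and names:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-resources-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU:    resource.MustParse("250m"),
                        corev1.ResourceMemory: resource.MustParse("32Mi"),
                    },
                    Limits: corev1.ResourceList{
                        corev1.ResourceCPU:    resource.MustParse("500m"),
                        corev1.ResourceMemory: resource.MustParse("64Mi"),
                    },
                },
                Env: []corev1.EnvVar{
                    // Each env var selects one resource field of this container; the divisor defaults to 1.
                    {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                    }},
                    {Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
                        ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
                    }},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
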
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:10:03.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 27 12:10:03.691: INFO: Number of nodes with available pods: 0
Jan 27 12:10:03.691: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:04.716: INFO: Number of nodes with available pods: 0
Jan 27 12:10:04.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:05.870: INFO: Number of nodes with available pods: 0
Jan 27 12:10:05.870: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:06.735: INFO: Number of nodes with available pods: 0
Jan 27 12:10:06.735: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:07.718: INFO: Number of nodes with available pods: 0
Jan 27 12:10:07.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:09.601: INFO: Number of nodes with available pods: 0
Jan 27 12:10:09.601: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:09.712: INFO: Number of nodes with available pods: 0
Jan 27 12:10:09.712: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:10.726: INFO: Number of nodes with available pods: 0
Jan 27 12:10:10.726: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:11.731: INFO: Number of nodes with available pods: 0
Jan 27 12:10:11.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 27 12:10:12.722: INFO: Number of nodes with available pods: 1
Jan 27 12:10:12.722: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 27 12:10:12.783: INFO: Number of nodes with available pods: 1
Jan 27 12:10:12.783: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vhl8t, will wait for the garbage collector to delete the pods
Jan 27 12:10:14.694: INFO: Deleting DaemonSet.extensions daemon-set took: 304.883124ms
Jan 27 12:10:14.995: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.008841ms
Jan 27 12:10:20.553: INFO: Number of nodes with available pods: 0
Jan 27 12:10:20.553: INFO: Number of running nodes: 0, number of available pods: 0
Jan 27 12:10:20.574: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vhl8t/daemonsets","resourceVersion":"19632458"},"items":null}

Jan 27 12:10:20.587: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vhl8t/pods","resourceVersion":"19632458"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:10:20.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vhl8t" for this suite.
Jan 27 12:10:26.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:10:26.946: INFO: namespace: e2e-tests-daemonsets-vhl8t, resource: bindings, ignored listing per whitelist
Jan 27 12:10:26.968: INFO: namespace e2e-tests-daemonsets-vhl8t deletion completed in 6.199185122s

• [SLOW TEST:23.550 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
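
Note: the "revived" step above relies on the DaemonSet controller recreating a daemon pod after its phase is forced to Failed. A minimal DaemonSet of the same shape (labels, image and port are illustrative, not the suite's fixture):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            // The selector must match the pod template labels; one daemon pod runs per schedulable node.
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
}
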
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:10:26.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 12:10:27.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-5fz9p" to be "success or failure"
Jan 27 12:10:27.164: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.671613ms
Jan 27 12:10:29.537: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390856663s
Jan 27 12:10:31.550: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403439165s
Jan 27 12:10:33.595: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449266562s
Jan 27 12:10:35.627: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.481171244s
Jan 27 12:10:37.649: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.503112729s
STEP: Saw pod success
Jan 27 12:10:37.650: INFO: Pod "downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:10:37.659: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 12:10:37.734: INFO: Waiting for pod downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005 to disappear
Jan 27 12:10:37.837: INFO: Pod downwardapi-volume-01074ec4-40fe-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:10:37.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5fz9p" for this suite.
Jan 27 12:10:43.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:10:44.212: INFO: namespace: e2e-tests-projected-5fz9p, resource: bindings, ignored listing per whitelist
Jan 27 12:10:44.227: INFO: namespace e2e-tests-projected-5fz9p deletion completed in 6.36898543s

• [SLOW TEST:17.259 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:10:44.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-94cwh
Jan 27 12:10:54.465: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-94cwh
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 12:10:54.479: INFO: Initial restart count of pod liveness-http is 0
Jan 27 12:11:12.880: INFO: Restart count of pod e2e-tests-container-probe-94cwh/liveness-http is now 1 (18.400582591s elapsed)
Jan 27 12:11:33.493: INFO: Restart count of pod e2e-tests-container-probe-94cwh/liveness-http is now 2 (39.013637337s elapsed)
Jan 27 12:11:53.749: INFO: Restart count of pod e2e-tests-container-probe-94cwh/liveness-http is now 3 (59.269999702s elapsed)
Jan 27 12:12:12.233: INFO: Restart count of pod e2e-tests-container-probe-94cwh/liveness-http is now 4 (1m17.754037019s elapsed)
Jan 27 12:13:12.843: INFO: Restart count of pod e2e-tests-container-probe-94cwh/liveness-http is now 5 (2m18.364133996s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:13:12.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-94cwh" for this suite.
Jan 27 12:13:19.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:13:19.172: INFO: namespace: e2e-tests-container-probe-94cwh, resource: bindings, ignored listing per whitelist
Jan 27 12:13:19.347: INFO: namespace e2e-tests-container-probe-94cwh deletion completed in 6.433595405s

• [SLOW TEST:155.119 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
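
Note: each restart above is driven by a liveness probe that begins to fail, and the test only requires the restart count to keep increasing. A sketch of such a pod; the image is hypothetical (any server whose /healthz starts failing after startup will do), and the embedded probe field is ProbeHandler in current k8s.io/api versus Handler in the v1.13-era API:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "registry.example.com/liveness-demo:1.0", // hypothetical image whose /healthz starts failing
                LivenessProbe: &corev1.Probe{
                    // ProbeHandler in current k8s.io/api; older API versions named this field Handler.
                    ProbeHandler: corev1.ProbeHandler{
                        HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                    },
                    InitialDelaySeconds: 15,
                    FailureThreshold:    1, // a single failed probe triggers a restart
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
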
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:13:19.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-67cb2e89-40fe-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-67cb2e89-40fe-11ea-b652-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:14:37.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2gl9v" for this suite.
Jan 27 12:15:01.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:15:01.667: INFO: namespace: e2e-tests-projected-2gl9v, resource: bindings, ignored listing per whitelist
Jan 27 12:15:01.741: INFO: namespace e2e-tests-projected-2gl9v deletion completed in 24.272246312s

• [SLOW TEST:102.394 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
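
Note: the kubelet refreshes configMap-backed volumes (including projected ones) in place, which is why the test can update the ConfigMap and then wait to observe the new content in the mounted file. An illustrative pod that keeps reading such a file (names, image and command are not the suite's):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd-demo"},
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
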
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:15:01.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lb4jh
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 27 12:15:02.062: INFO: Found 0 stateful pods, waiting for 3
Jan 27 12:15:12.083: INFO: Found 1 stateful pods, waiting for 3
Jan 27 12:15:22.072: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:15:22.072: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:15:22.072: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 27 12:15:32.075: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:15:32.075: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:15:32.075: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 27 12:15:42.080: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:15:42.080: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:15:42.080: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 27 12:15:42.172: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 27 12:15:52.282: INFO: Updating stateful set ss2
Jan 27 12:15:52.296: INFO: Waiting for Pod e2e-tests-statefulset-lb4jh/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 27 12:16:02.914: INFO: Found 2 stateful pods, waiting for 3
Jan 27 12:16:12.932: INFO: Found 2 stateful pods, waiting for 3
Jan 27 12:16:22.932: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:16:22.932: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:16:22.932: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 27 12:16:32.952: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:16:32.952: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:16:32.952: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 27 12:16:33.038: INFO: Updating stateful set ss2
Jan 27 12:16:33.119: INFO: Waiting for Pod e2e-tests-statefulset-lb4jh/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 27 12:16:44.301: INFO: Updating stateful set ss2
Jan 27 12:16:44.323: INFO: Waiting for StatefulSet e2e-tests-statefulset-lb4jh/ss2 to complete update
Jan 27 12:16:44.323: INFO: Waiting for Pod e2e-tests-statefulset-lb4jh/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 27 12:16:54.553: INFO: Waiting for StatefulSet e2e-tests-statefulset-lb4jh/ss2 to complete update
Jan 27 12:16:54.553: INFO: Waiting for Pod e2e-tests-statefulset-lb4jh/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 27 12:17:06.221: INFO: Waiting for StatefulSet e2e-tests-statefulset-lb4jh/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 27 12:17:14.358: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lb4jh
Jan 27 12:17:14.374: INFO: Scaling statefulset ss2 to 0
Jan 27 12:17:44.469: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 12:17:44.485: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:17:44.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lb4jh" for this suite.
Jan 27 12:17:52.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:17:52.735: INFO: namespace: e2e-tests-statefulset-lb4jh, resource: bindings, ignored listing per whitelist
Jan 27 12:17:52.803: INFO: namespace e2e-tests-statefulset-lb4jh deletion completed in 8.181310741s

• [SLOW TEST:171.061 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
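
Note: the canary and phased steps above hinge on spec.updateStrategy.rollingUpdate.partition: only pods whose ordinal is at or above the partition are moved to the new revision, so lowering the partition rolls the update out in phases. A sketch of an apps/v1 StatefulSet set up that way (replica and partition values are illustrative; the image matches the update target in the log):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(3)
    partition := int32(2) // only the pod with ordinal >= 2 (ss2-2) gets the new template: the canary
    labels := map[string]string{"app": "ss2"}
    ss := appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "test", // headless service, created separately
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.15-alpine",
                    }},
                },
            },
            UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                Type: appsv1.RollingUpdateStatefulSetStrategyType,
                RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                    Partition: &partition, // lower this step by step for a phased rollout
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ss, "", "  ")
    fmt.Println(string(out))
}
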
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:17:52.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0ac7898b-40ff-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 12:17:53.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-sc57c" to be "success or failure"
Jan 27 12:17:53.015: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.183364ms
Jan 27 12:17:55.038: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029792385s
Jan 27 12:17:57.047: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039153845s
Jan 27 12:17:59.443: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435160651s
Jan 27 12:18:01.455: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446830159s
Jan 27 12:18:03.467: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.458725698s
STEP: Saw pod success
Jan 27 12:18:03.467: INFO: Pod "pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:18:03.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 27 12:18:04.322: INFO: Waiting for pod pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005 to disappear
Jan 27 12:18:04.368: INFO: Pod pod-configmaps-0ac88ea5-40ff-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:18:04.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sc57c" for this suite.
Jan 27 12:18:12.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:18:12.722: INFO: namespace: e2e-tests-configmap-sc57c, resource: bindings, ignored listing per whitelist
Jan 27 12:18:12.908: INFO: namespace e2e-tests-configmap-sc57c deletion completed in 8.526609576s

• [SLOW TEST:20.105 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:18:12.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 27 12:18:13.215: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5gcdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-5gcdj/configmaps/e2e-watch-test-resource-version,UID:16c2ab41-40ff-11ea-a994-fa163e34d433,ResourceVersion:19633418,Generation:0,CreationTimestamp:2020-01-27 12:18:13 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 12:18:13.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5gcdj,SelfLink:/api/v1/namespaces/e2e-tests-watch-5gcdj/configmaps/e2e-watch-test-resource-version,UID:16c2ab41-40ff-11ea-a994-fa163e34d433,ResourceVersion:19633419,Generation:0,CreationTimestamp:2020-01-27 12:18:13 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:18:13.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-5gcdj" for this suite.
Jan 27 12:18:19.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:18:19.534: INFO: namespace: e2e-tests-watch-5gcdj, resource: bindings, ignored listing per whitelist
Jan 27 12:18:19.538: INFO: namespace e2e-tests-watch-5gcdj deletion completed in 6.316409278s

• [SLOW TEST:6.630 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
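
Note: starting a watch at an explicit resourceVersion makes the API server replay every change after that version, which is how the MODIFIED and DELETED events above are observed even though they happened before the watch began. A sketch using the current client-go API (the v1.13-era client's Watch did not take a context); the namespace, kubeconfig path and resourceVersion are illustrative:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Start the watch at an older, known resourceVersion; the API server replays
    // every change to matching ConfigMaps that happened after that version.
    w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector:   "watch-this-configmap=from-resource-version",
        ResourceVersion: "19633417", // hypothetical: the version returned by the first update
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
            fmt.Println(ev.Type, cm.Name, cm.ResourceVersion, cm.Data)
        }
    }
}
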
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:18:19.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 27 12:21:24.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:24.120: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:26.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:26.144: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:28.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:28.138: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:30.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:30.182: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:32.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:32.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:34.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:34.135: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:36.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:36.136: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:38.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:38.143: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:40.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:40.136: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:42.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:42.432: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:44.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:44.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:46.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:46.134: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 27 12:21:48.121: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 27 12:21:48.141: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:21:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mktmw" for this suite.
Jan 27 12:22:12.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:22:12.364: INFO: namespace: e2e-tests-container-lifecycle-hook-mktmw, resource: bindings, ignored listing per whitelist
Jan 27 12:22:12.412: INFO: namespace e2e-tests-container-lifecycle-hook-mktmw deletion completed in 24.263487429s

• [SLOW TEST:232.874 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
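
Note: a postStart exec hook runs inside the container right after it starts, and the pod above is only torn down once the hook's effect has been verified. An illustrative pod with such a hook; the suite's real hook calls out to a separate handler pod, so the command here is just a stand-in. The hook field is LifecycleHandler in current k8s.io/api (the v1.13-era API reused its Handler type):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "pod-with-poststart-exec-hook",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    PostStart: &corev1.LifecycleHandler{
                        Exec: &corev1.ExecAction{
                            // Stand-in command: leave a marker so the hook's execution can be checked.
                            Command: []string{"sh", "-c", "echo poststart-ran > /tmp/poststart"},
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
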
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:22:12.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 27 12:22:12.722: INFO: Waiting up to 5m0s for pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005" in namespace "e2e-tests-var-expansion-czhlz" to be "success or failure"
Jan 27 12:22:12.748: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.695386ms
Jan 27 12:22:14.765: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043138293s
Jan 27 12:22:16.787: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065302083s
Jan 27 12:22:18.801: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079050159s
Jan 27 12:22:20.830: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107571288s
Jan 27 12:22:22.861: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138855587s
STEP: Saw pod success
Jan 27 12:22:22.861: INFO: Pod "var-expansion-a5906651-40ff-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:22:22.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-a5906651-40ff-11ea-b652-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 27 12:22:23.179: INFO: Waiting for pod var-expansion-a5906651-40ff-11ea-b652-0242ac110005 to disappear
Jan 27 12:22:23.403: INFO: Pod var-expansion-a5906651-40ff-11ea-b652-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:22:23.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-czhlz" for this suite.
Jan 27 12:22:30.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:22:31.010: INFO: namespace: e2e-tests-var-expansion-czhlz, resource: bindings, ignored listing per whitelist
Jan 27 12:22:31.143: INFO: namespace e2e-tests-var-expansion-czhlz deletion completed in 7.648069491s

• [SLOW TEST:18.730 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
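
Note: composition works because the kubelet expands $(NAME) references to previously defined variables when it builds the container environment. A small illustrative spec (names and values are not the suite's):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep FOOBAR"},
                Env: []corev1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    {Name: "BAR", Value: "bar-value"},
                    // $(NAME) references to earlier variables are expanded at container start.
                    {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
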
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:22:31.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:22:42.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ws8vm" for this suite.
Jan 27 12:22:48.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:22:48.739: INFO: namespace: e2e-tests-emptydir-wrapper-ws8vm, resource: bindings, ignored listing per whitelist
Jan 27 12:22:48.812: INFO: namespace e2e-tests-emptydir-wrapper-ws8vm deletion completed in 6.373814787s

• [SLOW TEST:17.669 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:22:48.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 27 12:22:49.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:22:51.260: INFO: stderr: ""
Jan 27 12:22:51.260: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 12:22:51.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:22:51.546: INFO: stderr: ""
Jan 27 12:22:51.546: INFO: stdout: "update-demo-nautilus-7ddgh update-demo-nautilus-8wck8 "
Jan 27 12:22:51.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ddgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:22:51.743: INFO: stderr: ""
Jan 27 12:22:51.744: INFO: stdout: ""
Jan 27 12:22:51.744: INFO: update-demo-nautilus-7ddgh is created but not running
Jan 27 12:22:56.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:22:56.876: INFO: stderr: ""
Jan 27 12:22:56.876: INFO: stdout: "update-demo-nautilus-7ddgh update-demo-nautilus-8wck8 "
Jan 27 12:22:56.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ddgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:22:57.037: INFO: stderr: ""
Jan 27 12:22:57.037: INFO: stdout: ""
Jan 27 12:22:57.037: INFO: update-demo-nautilus-7ddgh is created but not running
Jan 27 12:23:02.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:02.198: INFO: stderr: ""
Jan 27 12:23:02.199: INFO: stdout: "update-demo-nautilus-7ddgh update-demo-nautilus-8wck8 "
Jan 27 12:23:02.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ddgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:02.346: INFO: stderr: ""
Jan 27 12:23:02.346: INFO: stdout: "true"
Jan 27 12:23:02.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ddgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:02.534: INFO: stderr: ""
Jan 27 12:23:02.534: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 12:23:02.534: INFO: validating pod update-demo-nautilus-7ddgh
Jan 27 12:23:02.565: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 12:23:02.565: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 27 12:23:02.565: INFO: update-demo-nautilus-7ddgh is verified up and running
Jan 27 12:23:02.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wck8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:02.671: INFO: stderr: ""
Jan 27 12:23:02.671: INFO: stdout: ""
Jan 27 12:23:02.671: INFO: update-demo-nautilus-8wck8 is created but not running
Jan 27 12:23:07.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:07.796: INFO: stderr: ""
Jan 27 12:23:07.796: INFO: stdout: "update-demo-nautilus-7ddgh update-demo-nautilus-8wck8 "
Jan 27 12:23:07.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ddgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:07.967: INFO: stderr: ""
Jan 27 12:23:07.967: INFO: stdout: "true"
Jan 27 12:23:07.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ddgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:08.087: INFO: stderr: ""
Jan 27 12:23:08.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 12:23:08.087: INFO: validating pod update-demo-nautilus-7ddgh
Jan 27 12:23:08.095: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 12:23:08.095: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 27 12:23:08.095: INFO: update-demo-nautilus-7ddgh is verified up and running
Jan 27 12:23:08.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wck8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:08.178: INFO: stderr: ""
Jan 27 12:23:08.178: INFO: stdout: "true"
Jan 27 12:23:08.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wck8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:08.297: INFO: stderr: ""
Jan 27 12:23:08.297: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 27 12:23:08.297: INFO: validating pod update-demo-nautilus-8wck8
Jan 27 12:23:08.311: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 27 12:23:08.311: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 27 12:23:08.311: INFO: update-demo-nautilus-8wck8 is verified up and running
STEP: rolling-update to new replication controller
Jan 27 12:23:08.315: INFO: scanned /root for discovery docs: 
Jan 27 12:23:08.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:43.266: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 27 12:23:43.266: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 27 12:23:43.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:43.469: INFO: stderr: ""
Jan 27 12:23:43.469: INFO: stdout: "update-demo-kitten-gd9h7 update-demo-kitten-psnmq "
Jan 27 12:23:43.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gd9h7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:43.653: INFO: stderr: ""
Jan 27 12:23:43.653: INFO: stdout: "true"
Jan 27 12:23:43.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gd9h7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:43.746: INFO: stderr: ""
Jan 27 12:23:43.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 27 12:23:43.746: INFO: validating pod update-demo-kitten-gd9h7
Jan 27 12:23:43.783: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 27 12:23:43.783: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 27 12:23:43.783: INFO: update-demo-kitten-gd9h7 is verified up and running
Jan 27 12:23:43.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-psnmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:43.934: INFO: stderr: ""
Jan 27 12:23:43.934: INFO: stdout: "true"
Jan 27 12:23:43.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-psnmq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh64w'
Jan 27 12:23:44.140: INFO: stderr: ""
Jan 27 12:23:44.140: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 27 12:23:44.140: INFO: validating pod update-demo-kitten-psnmq
Jan 27 12:23:44.170: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 27 12:23:44.170: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 27 12:23:44.170: INFO: update-demo-kitten-psnmq is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:23:44.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fh64w" for this suite.
Jan 27 12:24:10.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:24:10.394: INFO: namespace: e2e-tests-kubectl-fh64w, resource: bindings, ignored listing per whitelist
Jan 27 12:24:10.423: INFO: namespace e2e-tests-kubectl-fh64w deletion completed in 26.241267715s

• [SLOW TEST:81.610 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
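For reference: the update above goes through kubectl rolling-update, which the stderr at 12:23:43 already flags as deprecated in favor of kubectl rollout. A minimal sketch of the modern equivalent, assuming the workload were a Deployment named update-demo with a container also named update-demo (these names are illustrative, not what the spec actually creates):

kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0   # update-demo is a hypothetical Deployment
kubectl rollout status deployment/update-demo
kubectl rollout undo deployment/update-demo   # optional: revert if the kitten pods misbehave

The per-pod go-template queries in the log (against .status.containerStatuses and .spec.containers) are what kubectl rollout status automates: it blocks until the updated replicas report ready.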
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:24:10.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:24:22.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-b7mxf" for this suite.
Jan 27 12:24:28.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:24:29.008: INFO: namespace: e2e-tests-kubelet-test-b7mxf, resource: bindings, ignored listing per whitelist
Jan 27 12:24:29.094: INFO: namespace e2e-tests-kubelet-test-b7mxf deletion completed in 6.301821907s

• [SLOW TEST:18.670 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
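For reference: this spec runs a busybox container whose command always fails and asserts that the kubelet surfaces a terminated state with a non-empty reason in the pod status. A rough way to inspect the same fields by hand, with a hypothetical pod name (for a container that keeps restarting, the most recent failure is reported under lastState rather than state):

kubectl get pod failing-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'      # failing-pod is a hypothetical name
kubectl get pod failing-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'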
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:24:29.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-rg8sf
I0127 12:24:29.269150       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-rg8sf, replica count: 1
I0127 12:24:30.319844       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:31.320150       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:32.320425       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:33.320680       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:34.321092       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:35.321493       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:36.322031       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:37.322505       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:38.322868       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0127 12:24:39.323229       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 27 12:24:39.516: INFO: Created: latency-svc-xfntr
Jan 27 12:24:39.580: INFO: Got endpoints: latency-svc-xfntr [156.869595ms]
Jan 27 12:24:39.653: INFO: Created: latency-svc-cn248
Jan 27 12:24:39.796: INFO: Got endpoints: latency-svc-cn248 [215.435926ms]
Jan 27 12:24:39.832: INFO: Created: latency-svc-hr674
Jan 27 12:24:39.957: INFO: Got endpoints: latency-svc-hr674 [375.947084ms]
Jan 27 12:24:39.997: INFO: Created: latency-svc-qf2ck
Jan 27 12:24:40.000: INFO: Got endpoints: latency-svc-qf2ck [419.283646ms]
Jan 27 12:24:40.256: INFO: Created: latency-svc-h4rqr
Jan 27 12:24:40.277: INFO: Got endpoints: latency-svc-h4rqr [696.000651ms]
Jan 27 12:24:40.508: INFO: Created: latency-svc-q7kdj
Jan 27 12:24:40.514: INFO: Got endpoints: latency-svc-q7kdj [932.601872ms]
Jan 27 12:24:40.695: INFO: Created: latency-svc-q7bns
Jan 27 12:24:40.704: INFO: Got endpoints: latency-svc-q7bns [1.12287913s]
Jan 27 12:24:40.770: INFO: Created: latency-svc-d2hp4
Jan 27 12:24:40.925: INFO: Got endpoints: latency-svc-d2hp4 [1.342991835s]
Jan 27 12:24:40.941: INFO: Created: latency-svc-9tdrr
Jan 27 12:24:40.951: INFO: Got endpoints: latency-svc-9tdrr [1.370361967s]
Jan 27 12:24:41.255: INFO: Created: latency-svc-lxfnc
Jan 27 12:24:41.298: INFO: Got endpoints: latency-svc-lxfnc [1.716560855s]
Jan 27 12:24:41.534: INFO: Created: latency-svc-q48zf
Jan 27 12:24:41.545: INFO: Got endpoints: latency-svc-q48zf [1.963634242s]
Jan 27 12:24:41.742: INFO: Created: latency-svc-bpljw
Jan 27 12:24:42.041: INFO: Got endpoints: latency-svc-bpljw [2.459302326s]
Jan 27 12:24:42.059: INFO: Created: latency-svc-ssvhz
Jan 27 12:24:42.083: INFO: Got endpoints: latency-svc-ssvhz [2.501332906s]
Jan 27 12:24:42.358: INFO: Created: latency-svc-l5z74
Jan 27 12:24:42.386: INFO: Got endpoints: latency-svc-l5z74 [344.735045ms]
Jan 27 12:24:42.435: INFO: Created: latency-svc-2vjvv
Jan 27 12:24:42.556: INFO: Got endpoints: latency-svc-2vjvv [2.974402086s]
Jan 27 12:24:42.605: INFO: Created: latency-svc-2gwnh
Jan 27 12:24:42.634: INFO: Got endpoints: latency-svc-2gwnh [3.052216335s]
Jan 27 12:24:42.862: INFO: Created: latency-svc-zh78q
Jan 27 12:24:43.151: INFO: Got endpoints: latency-svc-zh78q [3.569172498s]
Jan 27 12:24:43.199: INFO: Created: latency-svc-4s2tw
Jan 27 12:24:43.552: INFO: Got endpoints: latency-svc-4s2tw [3.755581292s]
Jan 27 12:24:43.589: INFO: Created: latency-svc-ll9cq
Jan 27 12:24:43.639: INFO: Got endpoints: latency-svc-ll9cq [3.682174847s]
Jan 27 12:24:43.836: INFO: Created: latency-svc-tpl8z
Jan 27 12:24:44.061: INFO: Got endpoints: latency-svc-tpl8z [4.060843067s]
Jan 27 12:24:44.132: INFO: Created: latency-svc-2txwc
Jan 27 12:24:44.161: INFO: Got endpoints: latency-svc-2txwc [3.883206451s]
Jan 27 12:24:44.660: INFO: Created: latency-svc-c7lj9
Jan 27 12:24:44.671: INFO: Got endpoints: latency-svc-c7lj9 [4.156301413s]
Jan 27 12:24:44.745: INFO: Created: latency-svc-ls2zf
Jan 27 12:24:44.745: INFO: Got endpoints: latency-svc-ls2zf [4.040787873s]
Jan 27 12:24:44.853: INFO: Created: latency-svc-b8vhg
Jan 27 12:24:44.859: INFO: Got endpoints: latency-svc-b8vhg [3.934750917s]
Jan 27 12:24:44.916: INFO: Created: latency-svc-v94nl
Jan 27 12:24:44.920: INFO: Got endpoints: latency-svc-v94nl [3.968957965s]
Jan 27 12:24:45.077: INFO: Created: latency-svc-cnt7l
Jan 27 12:24:45.111: INFO: Got endpoints: latency-svc-cnt7l [3.81269847s]
Jan 27 12:24:45.133: INFO: Created: latency-svc-mhhgg
Jan 27 12:24:45.251: INFO: Got endpoints: latency-svc-mhhgg [3.705290041s]
Jan 27 12:24:45.261: INFO: Created: latency-svc-w65cn
Jan 27 12:24:45.275: INFO: Got endpoints: latency-svc-w65cn [3.191765692s]
Jan 27 12:24:45.315: INFO: Created: latency-svc-ssf5s
Jan 27 12:24:45.331: INFO: Got endpoints: latency-svc-ssf5s [2.945212902s]
Jan 27 12:24:45.470: INFO: Created: latency-svc-9gkfz
Jan 27 12:24:45.493: INFO: Got endpoints: latency-svc-9gkfz [2.936315185s]
Jan 27 12:24:45.579: INFO: Created: latency-svc-8ltg8
Jan 27 12:24:45.688: INFO: Got endpoints: latency-svc-8ltg8 [3.054046534s]
Jan 27 12:24:45.721: INFO: Created: latency-svc-r9t7c
Jan 27 12:24:45.751: INFO: Got endpoints: latency-svc-r9t7c [2.59995208s]
Jan 27 12:24:45.964: INFO: Created: latency-svc-qm9vb
Jan 27 12:24:45.981: INFO: Got endpoints: latency-svc-qm9vb [2.429069762s]
Jan 27 12:24:46.205: INFO: Created: latency-svc-wpf8s
Jan 27 12:24:46.234: INFO: Got endpoints: latency-svc-wpf8s [2.59510254s]
Jan 27 12:24:46.495: INFO: Created: latency-svc-8rj4r
Jan 27 12:24:46.526: INFO: Got endpoints: latency-svc-8rj4r [2.464463088s]
Jan 27 12:24:46.744: INFO: Created: latency-svc-vw4bj
Jan 27 12:24:46.837: INFO: Got endpoints: latency-svc-vw4bj [2.676211331s]
Jan 27 12:24:46.839: INFO: Created: latency-svc-6lsbg
Jan 27 12:24:46.926: INFO: Got endpoints: latency-svc-6lsbg [2.255724036s]
Jan 27 12:24:46.933: INFO: Created: latency-svc-c5f5v
Jan 27 12:24:46.971: INFO: Got endpoints: latency-svc-c5f5v [2.225492266s]
Jan 27 12:24:47.090: INFO: Created: latency-svc-p2cmq
Jan 27 12:24:47.100: INFO: Got endpoints: latency-svc-p2cmq [2.240369281s]
Jan 27 12:24:47.158: INFO: Created: latency-svc-496pc
Jan 27 12:24:47.260: INFO: Got endpoints: latency-svc-496pc [2.339651965s]
Jan 27 12:24:47.337: INFO: Created: latency-svc-tdq25
Jan 27 12:24:47.345: INFO: Got endpoints: latency-svc-tdq25 [2.233945848s]
Jan 27 12:24:47.501: INFO: Created: latency-svc-7hwrt
Jan 27 12:24:47.514: INFO: Got endpoints: latency-svc-7hwrt [2.263419821s]
Jan 27 12:24:47.563: INFO: Created: latency-svc-zmbks
Jan 27 12:24:47.670: INFO: Got endpoints: latency-svc-zmbks [2.394458332s]
Jan 27 12:24:47.694: INFO: Created: latency-svc-qsqwr
Jan 27 12:24:47.702: INFO: Got endpoints: latency-svc-qsqwr [2.370062079s]
Jan 27 12:24:47.774: INFO: Created: latency-svc-b4rt6
Jan 27 12:24:47.880: INFO: Got endpoints: latency-svc-b4rt6 [2.387068629s]
Jan 27 12:24:47.894: INFO: Created: latency-svc-cnkbz
Jan 27 12:24:47.905: INFO: Got endpoints: latency-svc-cnkbz [2.216195264s]
Jan 27 12:24:47.967: INFO: Created: latency-svc-4f97z
Jan 27 12:24:48.082: INFO: Got endpoints: latency-svc-4f97z [2.331086474s]
Jan 27 12:24:48.099: INFO: Created: latency-svc-4695p
Jan 27 12:24:48.125: INFO: Got endpoints: latency-svc-4695p [2.144123211s]
Jan 27 12:24:48.308: INFO: Created: latency-svc-mj2rw
Jan 27 12:24:48.313: INFO: Got endpoints: latency-svc-mj2rw [2.078272283s]
Jan 27 12:24:48.379: INFO: Created: latency-svc-t72md
Jan 27 12:24:48.482: INFO: Got endpoints: latency-svc-t72md [1.955818137s]
Jan 27 12:24:48.522: INFO: Created: latency-svc-nd58n
Jan 27 12:24:48.553: INFO: Got endpoints: latency-svc-nd58n [1.716313681s]
Jan 27 12:24:48.688: INFO: Created: latency-svc-8kxfp
Jan 27 12:24:48.724: INFO: Got endpoints: latency-svc-8kxfp [1.797210469s]
Jan 27 12:24:48.728: INFO: Created: latency-svc-9m4l2
Jan 27 12:24:48.734: INFO: Got endpoints: latency-svc-9m4l2 [1.763254259s]
Jan 27 12:24:48.938: INFO: Created: latency-svc-7rfn6
Jan 27 12:24:48.959: INFO: Got endpoints: latency-svc-7rfn6 [1.859325497s]
Jan 27 12:24:49.078: INFO: Created: latency-svc-czkkh
Jan 27 12:24:49.100: INFO: Got endpoints: latency-svc-czkkh [1.839520145s]
Jan 27 12:24:49.157: INFO: Created: latency-svc-lszd9
Jan 27 12:24:49.290: INFO: Got endpoints: latency-svc-lszd9 [1.945257334s]
Jan 27 12:24:49.375: INFO: Created: latency-svc-k66s2
Jan 27 12:24:49.552: INFO: Got endpoints: latency-svc-k66s2 [2.037501769s]
Jan 27 12:24:49.598: INFO: Created: latency-svc-tjxz8
Jan 27 12:24:49.732: INFO: Got endpoints: latency-svc-tjxz8 [2.062813814s]
Jan 27 12:24:49.764: INFO: Created: latency-svc-7mtdb
Jan 27 12:24:49.784: INFO: Got endpoints: latency-svc-7mtdb [2.082322125s]
Jan 27 12:24:49.954: INFO: Created: latency-svc-hbxpf
Jan 27 12:24:49.973: INFO: Got endpoints: latency-svc-hbxpf [2.092547471s]
Jan 27 12:24:50.033: INFO: Created: latency-svc-mxvfp
Jan 27 12:24:50.121: INFO: Got endpoints: latency-svc-mxvfp [2.216178602s]
Jan 27 12:24:50.150: INFO: Created: latency-svc-grtqx
Jan 27 12:24:50.170: INFO: Got endpoints: latency-svc-grtqx [2.087181205s]
Jan 27 12:24:50.386: INFO: Created: latency-svc-ljdz6
Jan 27 12:24:50.401: INFO: Got endpoints: latency-svc-ljdz6 [2.27561581s]
Jan 27 12:24:50.496: INFO: Created: latency-svc-z92b7
Jan 27 12:24:50.558: INFO: Got endpoints: latency-svc-z92b7 [2.244688856s]
Jan 27 12:24:50.599: INFO: Created: latency-svc-62htv
Jan 27 12:24:50.603: INFO: Got endpoints: latency-svc-62htv [2.121531392s]
Jan 27 12:24:50.723: INFO: Created: latency-svc-bgz5z
Jan 27 12:24:50.744: INFO: Got endpoints: latency-svc-bgz5z [2.19087528s]
Jan 27 12:24:50.792: INFO: Created: latency-svc-jbfg4
Jan 27 12:24:50.956: INFO: Got endpoints: latency-svc-jbfg4 [2.231941987s]
Jan 27 12:24:50.976: INFO: Created: latency-svc-k4m8b
Jan 27 12:24:50.993: INFO: Got endpoints: latency-svc-k4m8b [2.258890241s]
Jan 27 12:24:51.109: INFO: Created: latency-svc-kks69
Jan 27 12:24:51.127: INFO: Got endpoints: latency-svc-kks69 [2.167265993s]
Jan 27 12:24:51.191: INFO: Created: latency-svc-dnbdv
Jan 27 12:24:51.278: INFO: Got endpoints: latency-svc-dnbdv [2.178269809s]
Jan 27 12:24:51.298: INFO: Created: latency-svc-rmqh6
Jan 27 12:24:51.309: INFO: Got endpoints: latency-svc-rmqh6 [2.017729467s]
Jan 27 12:24:51.440: INFO: Created: latency-svc-5wgf6
Jan 27 12:24:51.452: INFO: Got endpoints: latency-svc-5wgf6 [1.899809688s]
Jan 27 12:24:51.518: INFO: Created: latency-svc-bmmn5
Jan 27 12:24:51.632: INFO: Got endpoints: latency-svc-bmmn5 [1.899289253s]
Jan 27 12:24:51.746: INFO: Created: latency-svc-mjhwf
Jan 27 12:24:51.950: INFO: Got endpoints: latency-svc-mjhwf [2.165865612s]
Jan 27 12:24:51.966: INFO: Created: latency-svc-rwf6q
Jan 27 12:24:51.976: INFO: Got endpoints: latency-svc-rwf6q [2.002940362s]
Jan 27 12:24:52.212: INFO: Created: latency-svc-fxb8j
Jan 27 12:24:52.266: INFO: Got endpoints: latency-svc-fxb8j [2.145184501s]
Jan 27 12:24:52.390: INFO: Created: latency-svc-v8q55
Jan 27 12:24:52.431: INFO: Got endpoints: latency-svc-v8q55 [2.260976896s]
Jan 27 12:24:52.656: INFO: Created: latency-svc-tknwc
Jan 27 12:24:52.703: INFO: Got endpoints: latency-svc-tknwc [2.301644563s]
Jan 27 12:24:52.821: INFO: Created: latency-svc-gt8tb
Jan 27 12:24:52.868: INFO: Got endpoints: latency-svc-gt8tb [2.310333794s]
Jan 27 12:24:53.000: INFO: Created: latency-svc-7f9st
Jan 27 12:24:53.011: INFO: Got endpoints: latency-svc-7f9st [2.407074577s]
Jan 27 12:24:53.049: INFO: Created: latency-svc-r2xsk
Jan 27 12:24:53.064: INFO: Got endpoints: latency-svc-r2xsk [2.319401613s]
Jan 27 12:24:53.272: INFO: Created: latency-svc-f67br
Jan 27 12:24:53.292: INFO: Got endpoints: latency-svc-f67br [2.335899848s]
Jan 27 12:24:53.420: INFO: Created: latency-svc-c2x9j
Jan 27 12:24:53.456: INFO: Got endpoints: latency-svc-c2x9j [2.462479541s]
Jan 27 12:24:53.500: INFO: Created: latency-svc-d4wn6
Jan 27 12:24:53.571: INFO: Got endpoints: latency-svc-d4wn6 [2.443886092s]
Jan 27 12:24:53.755: INFO: Created: latency-svc-8wgcb
Jan 27 12:24:53.775: INFO: Got endpoints: latency-svc-8wgcb [2.496995508s]
Jan 27 12:24:53.817: INFO: Created: latency-svc-m5fz6
Jan 27 12:24:53.931: INFO: Got endpoints: latency-svc-m5fz6 [2.621746998s]
Jan 27 12:24:53.958: INFO: Created: latency-svc-wwtpz
Jan 27 12:24:53.974: INFO: Got endpoints: latency-svc-wwtpz [2.522351924s]
Jan 27 12:24:54.148: INFO: Created: latency-svc-xfwks
Jan 27 12:24:54.159: INFO: Got endpoints: latency-svc-xfwks [2.52654092s]
Jan 27 12:24:54.253: INFO: Created: latency-svc-kqmpw
Jan 27 12:24:54.392: INFO: Got endpoints: latency-svc-kqmpw [2.441846936s]
Jan 27 12:24:54.460: INFO: Created: latency-svc-8bh27
Jan 27 12:24:54.614: INFO: Got endpoints: latency-svc-8bh27 [2.638669371s]
Jan 27 12:24:54.674: INFO: Created: latency-svc-mjf8x
Jan 27 12:24:54.678: INFO: Got endpoints: latency-svc-mjf8x [2.411726176s]
Jan 27 12:24:54.843: INFO: Created: latency-svc-zsxkv
Jan 27 12:24:54.878: INFO: Got endpoints: latency-svc-zsxkv [2.447008s]
Jan 27 12:24:54.998: INFO: Created: latency-svc-bsdth
Jan 27 12:24:55.014: INFO: Got endpoints: latency-svc-bsdth [2.311026704s]
Jan 27 12:24:55.054: INFO: Created: latency-svc-2x9vq
Jan 27 12:24:55.071: INFO: Got endpoints: latency-svc-2x9vq [2.203286801s]
Jan 27 12:24:55.152: INFO: Created: latency-svc-vg4fx
Jan 27 12:24:55.173: INFO: Got endpoints: latency-svc-vg4fx [2.162216337s]
Jan 27 12:24:55.213: INFO: Created: latency-svc-nsswq
Jan 27 12:24:55.230: INFO: Got endpoints: latency-svc-nsswq [2.166281579s]
Jan 27 12:24:55.399: INFO: Created: latency-svc-wqpdk
Jan 27 12:24:55.404: INFO: Got endpoints: latency-svc-wqpdk [2.112202186s]
Jan 27 12:24:55.627: INFO: Created: latency-svc-97c6g
Jan 27 12:24:55.657: INFO: Got endpoints: latency-svc-97c6g [2.201687973s]
Jan 27 12:24:55.786: INFO: Created: latency-svc-bb2j9
Jan 27 12:24:55.807: INFO: Got endpoints: latency-svc-bb2j9 [2.236331181s]
Jan 27 12:24:57.061: INFO: Created: latency-svc-m69js
Jan 27 12:24:57.105: INFO: Got endpoints: latency-svc-m69js [3.329316509s]
Jan 27 12:24:57.323: INFO: Created: latency-svc-6ct9b
Jan 27 12:24:57.357: INFO: Got endpoints: latency-svc-6ct9b [3.425901016s]
Jan 27 12:24:57.476: INFO: Created: latency-svc-plf67
Jan 27 12:24:57.494: INFO: Got endpoints: latency-svc-plf67 [3.519132442s]
Jan 27 12:24:57.567: INFO: Created: latency-svc-ntbtp
Jan 27 12:24:57.622: INFO: Got endpoints: latency-svc-ntbtp [3.463636271s]
Jan 27 12:24:57.706: INFO: Created: latency-svc-zffnf
Jan 27 12:24:57.723: INFO: Got endpoints: latency-svc-zffnf [3.330450033s]
Jan 27 12:24:57.833: INFO: Created: latency-svc-f4ctd
Jan 27 12:24:57.888: INFO: Got endpoints: latency-svc-f4ctd [3.27344146s]
Jan 27 12:24:57.961: INFO: Created: latency-svc-ksxpw
Jan 27 12:24:57.974: INFO: Got endpoints: latency-svc-ksxpw [3.295457401s]
Jan 27 12:24:58.029: INFO: Created: latency-svc-gpkvj
Jan 27 12:24:58.049: INFO: Got endpoints: latency-svc-gpkvj [3.171473261s]
Jan 27 12:24:58.201: INFO: Created: latency-svc-bh72f
Jan 27 12:24:58.215: INFO: Got endpoints: latency-svc-bh72f [3.200752674s]
Jan 27 12:24:58.257: INFO: Created: latency-svc-8ckvv
Jan 27 12:24:58.404: INFO: Got endpoints: latency-svc-8ckvv [3.332331162s]
Jan 27 12:24:58.433: INFO: Created: latency-svc-glmhw
Jan 27 12:24:58.489: INFO: Got endpoints: latency-svc-glmhw [3.316240021s]
Jan 27 12:24:58.501: INFO: Created: latency-svc-wtshq
Jan 27 12:24:58.596: INFO: Got endpoints: latency-svc-wtshq [3.366049402s]
Jan 27 12:24:58.637: INFO: Created: latency-svc-88zwf
Jan 27 12:24:58.681: INFO: Got endpoints: latency-svc-88zwf [3.276242563s]
Jan 27 12:24:58.835: INFO: Created: latency-svc-slf5p
Jan 27 12:24:58.868: INFO: Got endpoints: latency-svc-slf5p [3.2106211s]
Jan 27 12:24:59.008: INFO: Created: latency-svc-pkq6z
Jan 27 12:24:59.029: INFO: Got endpoints: latency-svc-pkq6z [3.22193288s]
Jan 27 12:24:59.088: INFO: Created: latency-svc-lmv4r
Jan 27 12:24:59.091: INFO: Got endpoints: latency-svc-lmv4r [1.98602457s]
Jan 27 12:24:59.250: INFO: Created: latency-svc-6lhwh
Jan 27 12:24:59.268: INFO: Got endpoints: latency-svc-6lhwh [1.91098999s]
Jan 27 12:24:59.422: INFO: Created: latency-svc-42cg5
Jan 27 12:24:59.453: INFO: Got endpoints: latency-svc-42cg5 [1.959701221s]
Jan 27 12:24:59.695: INFO: Created: latency-svc-z6dp4
Jan 27 12:24:59.775: INFO: Created: latency-svc-n5xfl
Jan 27 12:24:59.938: INFO: Created: latency-svc-mzdsz
Jan 27 12:24:59.957: INFO: Got endpoints: latency-svc-z6dp4 [2.334362861s]
Jan 27 12:24:59.958: INFO: Got endpoints: latency-svc-mzdsz [2.070020557s]
Jan 27 12:25:00.010: INFO: Created: latency-svc-k4v2w
Jan 27 12:25:00.018: INFO: Got endpoints: latency-svc-n5xfl [2.295496225s]
Jan 27 12:25:00.105: INFO: Got endpoints: latency-svc-k4v2w [2.131512461s]
Jan 27 12:25:00.144: INFO: Created: latency-svc-qg5bg
Jan 27 12:25:00.173: INFO: Got endpoints: latency-svc-qg5bg [2.123223097s]
Jan 27 12:25:00.288: INFO: Created: latency-svc-2v5jk
Jan 27 12:25:00.307: INFO: Got endpoints: latency-svc-2v5jk [2.091975667s]
Jan 27 12:25:00.357: INFO: Created: latency-svc-fblxp
Jan 27 12:25:00.373: INFO: Got endpoints: latency-svc-fblxp [1.969163467s]
Jan 27 12:25:00.478: INFO: Created: latency-svc-tj99g
Jan 27 12:25:00.518: INFO: Got endpoints: latency-svc-tj99g [2.028325039s]
Jan 27 12:25:00.566: INFO: Created: latency-svc-pfprp
Jan 27 12:25:00.706: INFO: Created: latency-svc-tqdx7
Jan 27 12:25:00.706: INFO: Got endpoints: latency-svc-pfprp [2.109310232s]
Jan 27 12:25:00.721: INFO: Got endpoints: latency-svc-tqdx7 [2.039914336s]
Jan 27 12:25:00.913: INFO: Created: latency-svc-hvf98
Jan 27 12:25:00.939: INFO: Got endpoints: latency-svc-hvf98 [2.070056139s]
Jan 27 12:25:01.259: INFO: Created: latency-svc-tfvbl
Jan 27 12:25:01.619: INFO: Got endpoints: latency-svc-tfvbl [2.58990101s]
Jan 27 12:25:01.634: INFO: Created: latency-svc-xtnp6
Jan 27 12:25:01.653: INFO: Got endpoints: latency-svc-xtnp6 [2.561420014s]
Jan 27 12:25:01.985: INFO: Created: latency-svc-lhwdq
Jan 27 12:25:02.076: INFO: Got endpoints: latency-svc-lhwdq [2.808270338s]
Jan 27 12:25:02.276: INFO: Created: latency-svc-mwtfx
Jan 27 12:25:02.324: INFO: Got endpoints: latency-svc-mwtfx [2.870201586s]
Jan 27 12:25:02.541: INFO: Created: latency-svc-hg5q9
Jan 27 12:25:02.560: INFO: Got endpoints: latency-svc-hg5q9 [2.602777601s]
Jan 27 12:25:02.621: INFO: Created: latency-svc-wjvm9
Jan 27 12:25:02.689: INFO: Got endpoints: latency-svc-wjvm9 [2.7310002s]
Jan 27 12:25:02.732: INFO: Created: latency-svc-9nl8h
Jan 27 12:25:02.745: INFO: Got endpoints: latency-svc-9nl8h [2.726744152s]
Jan 27 12:25:02.780: INFO: Created: latency-svc-kq8sh
Jan 27 12:25:02.906: INFO: Created: latency-svc-lldjx
Jan 27 12:25:02.906: INFO: Got endpoints: latency-svc-kq8sh [2.800212211s]
Jan 27 12:25:02.913: INFO: Got endpoints: latency-svc-lldjx [2.740518768s]
Jan 27 12:25:02.959: INFO: Created: latency-svc-bqqq7
Jan 27 12:25:03.062: INFO: Got endpoints: latency-svc-bqqq7 [2.754286094s]
Jan 27 12:25:03.092: INFO: Created: latency-svc-t8jkq
Jan 27 12:25:03.108: INFO: Got endpoints: latency-svc-t8jkq [2.734959872s]
Jan 27 12:25:03.228: INFO: Created: latency-svc-4mffr
Jan 27 12:25:03.238: INFO: Got endpoints: latency-svc-4mffr [2.72049684s]
Jan 27 12:25:03.294: INFO: Created: latency-svc-l86d2
Jan 27 12:25:03.308: INFO: Got endpoints: latency-svc-l86d2 [2.601779712s]
Jan 27 12:25:03.426: INFO: Created: latency-svc-js2jk
Jan 27 12:25:03.446: INFO: Got endpoints: latency-svc-js2jk [2.725140533s]
Jan 27 12:25:03.488: INFO: Created: latency-svc-h9kp5
Jan 27 12:25:03.616: INFO: Got endpoints: latency-svc-h9kp5 [2.677094908s]
Jan 27 12:25:03.637: INFO: Created: latency-svc-kr5c2
Jan 27 12:25:03.653: INFO: Got endpoints: latency-svc-kr5c2 [2.033201966s]
Jan 27 12:25:03.720: INFO: Created: latency-svc-qs76x
Jan 27 12:25:03.805: INFO: Got endpoints: latency-svc-qs76x [2.152128798s]
Jan 27 12:25:03.836: INFO: Created: latency-svc-c6cwf
Jan 27 12:25:03.864: INFO: Got endpoints: latency-svc-c6cwf [1.787648769s]
Jan 27 12:25:03.995: INFO: Created: latency-svc-ktx6w
Jan 27 12:25:04.006: INFO: Got endpoints: latency-svc-ktx6w [1.681941609s]
Jan 27 12:25:04.062: INFO: Created: latency-svc-s4fn8
Jan 27 12:25:04.169: INFO: Got endpoints: latency-svc-s4fn8 [1.608904773s]
Jan 27 12:25:04.174: INFO: Created: latency-svc-vnnvg
Jan 27 12:25:04.189: INFO: Got endpoints: latency-svc-vnnvg [1.499896191s]
Jan 27 12:25:04.330: INFO: Created: latency-svc-4tj9w
Jan 27 12:25:04.348: INFO: Created: latency-svc-7w8xp
Jan 27 12:25:04.349: INFO: Got endpoints: latency-svc-4tj9w [1.604146239s]
Jan 27 12:25:04.365: INFO: Got endpoints: latency-svc-7w8xp [1.458159106s]
Jan 27 12:25:04.408: INFO: Created: latency-svc-6q9wl
Jan 27 12:25:04.509: INFO: Got endpoints: latency-svc-6q9wl [1.595172024s]
Jan 27 12:25:04.530: INFO: Created: latency-svc-s5kls
Jan 27 12:25:04.547: INFO: Got endpoints: latency-svc-s5kls [1.484449095s]
Jan 27 12:25:04.599: INFO: Created: latency-svc-kslhh
Jan 27 12:25:04.696: INFO: Got endpoints: latency-svc-kslhh [1.587155886s]
Jan 27 12:25:04.719: INFO: Created: latency-svc-6967p
Jan 27 12:25:04.721: INFO: Got endpoints: latency-svc-6967p [1.482547713s]
Jan 27 12:25:04.776: INFO: Created: latency-svc-wwnkt
Jan 27 12:25:04.913: INFO: Got endpoints: latency-svc-wwnkt [1.604637415s]
Jan 27 12:25:04.930: INFO: Created: latency-svc-64d4j
Jan 27 12:25:04.971: INFO: Got endpoints: latency-svc-64d4j [1.525213062s]
Jan 27 12:25:04.973: INFO: Created: latency-svc-md4p6
Jan 27 12:25:04.987: INFO: Got endpoints: latency-svc-md4p6 [1.370355695s]
Jan 27 12:25:05.094: INFO: Created: latency-svc-x5gl7
Jan 27 12:25:05.098: INFO: Got endpoints: latency-svc-x5gl7 [1.444806565s]
Jan 27 12:25:05.153: INFO: Created: latency-svc-6nw4t
Jan 27 12:25:05.161: INFO: Got endpoints: latency-svc-6nw4t [1.355775868s]
Jan 27 12:25:05.224: INFO: Created: latency-svc-8s8bn
Jan 27 12:25:05.242: INFO: Got endpoints: latency-svc-8s8bn [1.377952277s]
Jan 27 12:25:05.301: INFO: Created: latency-svc-2vzvl
Jan 27 12:25:05.322: INFO: Got endpoints: latency-svc-2vzvl [1.315685897s]
Jan 27 12:25:05.449: INFO: Created: latency-svc-mr6n6
Jan 27 12:25:05.519: INFO: Created: latency-svc-nmrx8
Jan 27 12:25:05.519: INFO: Got endpoints: latency-svc-mr6n6 [1.350535242s]
Jan 27 12:25:05.664: INFO: Got endpoints: latency-svc-nmrx8 [1.474933696s]
Jan 27 12:25:05.677: INFO: Created: latency-svc-frfqg
Jan 27 12:25:05.690: INFO: Got endpoints: latency-svc-frfqg [1.340897699s]
Jan 27 12:25:05.744: INFO: Created: latency-svc-v85zv
Jan 27 12:25:05.749: INFO: Got endpoints: latency-svc-v85zv [1.384315583s]
Jan 27 12:25:05.909: INFO: Created: latency-svc-6lq44
Jan 27 12:25:05.944: INFO: Got endpoints: latency-svc-6lq44 [1.434688817s]
Jan 27 12:25:06.137: INFO: Created: latency-svc-ghjpq
Jan 27 12:25:06.329: INFO: Got endpoints: latency-svc-ghjpq [1.78219693s]
Jan 27 12:25:06.358: INFO: Created: latency-svc-g5lw4
Jan 27 12:25:06.380: INFO: Got endpoints: latency-svc-g5lw4 [1.684279699s]
Jan 27 12:25:06.425: INFO: Created: latency-svc-84lkx
Jan 27 12:25:06.565: INFO: Got endpoints: latency-svc-84lkx [1.844347095s]
Jan 27 12:25:06.729: INFO: Created: latency-svc-76zxd
Jan 27 12:25:06.760: INFO: Got endpoints: latency-svc-76zxd [1.846990286s]
Jan 27 12:25:06.966: INFO: Created: latency-svc-v65ms
Jan 27 12:25:06.987: INFO: Got endpoints: latency-svc-v65ms [2.015930442s]
Jan 27 12:25:07.041: INFO: Created: latency-svc-2cb8z
Jan 27 12:25:07.160: INFO: Got endpoints: latency-svc-2cb8z [2.173356891s]
Jan 27 12:25:07.181: INFO: Created: latency-svc-6mps9
Jan 27 12:25:07.196: INFO: Got endpoints: latency-svc-6mps9 [2.098467737s]
Jan 27 12:25:07.266: INFO: Created: latency-svc-djgfj
Jan 27 12:25:07.433: INFO: Got endpoints: latency-svc-djgfj [2.271934979s]
Jan 27 12:25:07.493: INFO: Created: latency-svc-mq4j4
Jan 27 12:25:07.502: INFO: Got endpoints: latency-svc-mq4j4 [2.259560772s]
Jan 27 12:25:07.708: INFO: Created: latency-svc-mgq7x
Jan 27 12:25:08.630: INFO: Got endpoints: latency-svc-mgq7x [3.308206629s]
Jan 27 12:25:09.097: INFO: Created: latency-svc-5tnbd
Jan 27 12:25:09.124: INFO: Got endpoints: latency-svc-5tnbd [3.604118334s]
Jan 27 12:25:09.282: INFO: Created: latency-svc-8ccgq
Jan 27 12:25:09.289: INFO: Got endpoints: latency-svc-8ccgq [3.624150262s]
Jan 27 12:25:09.344: INFO: Created: latency-svc-k4cf2
Jan 27 12:25:09.447: INFO: Got endpoints: latency-svc-k4cf2 [3.756423808s]
Jan 27 12:25:09.510: INFO: Created: latency-svc-px2l5
Jan 27 12:25:09.529: INFO: Got endpoints: latency-svc-px2l5 [3.77990641s]
Jan 27 12:25:09.746: INFO: Created: latency-svc-np8tc
Jan 27 12:25:09.746: INFO: Got endpoints: latency-svc-np8tc [3.802159076s]
Jan 27 12:25:09.802: INFO: Created: latency-svc-fmq42
Jan 27 12:25:09.914: INFO: Got endpoints: latency-svc-fmq42 [3.584678967s]
Jan 27 12:25:09.926: INFO: Created: latency-svc-vskbw
Jan 27 12:25:09.941: INFO: Got endpoints: latency-svc-vskbw [3.561264222s]
Jan 27 12:25:09.988: INFO: Created: latency-svc-f8pkv
Jan 27 12:25:10.121: INFO: Got endpoints: latency-svc-f8pkv [3.554967159s]
Jan 27 12:25:10.139: INFO: Created: latency-svc-2h5w5
Jan 27 12:25:10.150: INFO: Got endpoints: latency-svc-2h5w5 [3.389810325s]
Jan 27 12:25:10.189: INFO: Created: latency-svc-rwjmv
Jan 27 12:25:10.303: INFO: Got endpoints: latency-svc-rwjmv [3.316077177s]
Jan 27 12:25:10.327: INFO: Created: latency-svc-9hdhk
Jan 27 12:25:10.361: INFO: Got endpoints: latency-svc-9hdhk [3.20054004s]
Jan 27 12:25:10.517: INFO: Created: latency-svc-4r9h6
Jan 27 12:25:10.550: INFO: Got endpoints: latency-svc-4r9h6 [3.353708768s]
Jan 27 12:25:10.596: INFO: Created: latency-svc-lrch2
Jan 27 12:25:10.687: INFO: Got endpoints: latency-svc-lrch2 [3.253629188s]
Jan 27 12:25:10.707: INFO: Created: latency-svc-5nz6f
Jan 27 12:25:10.730: INFO: Got endpoints: latency-svc-5nz6f [3.227699478s]
Jan 27 12:25:10.769: INFO: Created: latency-svc-hmk72
Jan 27 12:25:10.931: INFO: Got endpoints: latency-svc-hmk72 [2.301328662s]
Jan 27 12:25:10.969: INFO: Created: latency-svc-hqtfd
Jan 27 12:25:11.002: INFO: Got endpoints: latency-svc-hqtfd [1.878600107s]
Jan 27 12:25:11.097: INFO: Created: latency-svc-mm7h2
Jan 27 12:25:11.144: INFO: Got endpoints: latency-svc-mm7h2 [1.855015544s]
Jan 27 12:25:11.154: INFO: Created: latency-svc-kt7t6
Jan 27 12:25:11.166: INFO: Got endpoints: latency-svc-kt7t6 [1.719337978s]
Jan 27 12:25:11.281: INFO: Created: latency-svc-6d8bk
Jan 27 12:25:11.295: INFO: Got endpoints: latency-svc-6d8bk [1.765784368s]
Jan 27 12:25:11.357: INFO: Created: latency-svc-7cmpc
Jan 27 12:25:11.465: INFO: Got endpoints: latency-svc-7cmpc [1.719190748s]
Jan 27 12:25:11.487: INFO: Created: latency-svc-bqnhx
Jan 27 12:25:11.529: INFO: Got endpoints: latency-svc-bqnhx [1.614711133s]
Jan 27 12:25:11.699: INFO: Created: latency-svc-lxfrp
Jan 27 12:25:11.704: INFO: Got endpoints: latency-svc-lxfrp [1.76271926s]
Jan 27 12:25:11.795: INFO: Created: latency-svc-d2q7p
Jan 27 12:25:11.943: INFO: Got endpoints: latency-svc-d2q7p [1.822383697s]
Jan 27 12:25:11.956: INFO: Created: latency-svc-6stdc
Jan 27 12:25:11.969: INFO: Got endpoints: latency-svc-6stdc [1.819395194s]
Jan 27 12:25:11.969: INFO: Latencies: [215.435926ms 344.735045ms 375.947084ms 419.283646ms 696.000651ms 932.601872ms 1.12287913s 1.315685897s 1.340897699s 1.342991835s 1.350535242s 1.355775868s 1.370355695s 1.370361967s 1.377952277s 1.384315583s 1.434688817s 1.444806565s 1.458159106s 1.474933696s 1.482547713s 1.484449095s 1.499896191s 1.525213062s 1.587155886s 1.595172024s 1.604146239s 1.604637415s 1.608904773s 1.614711133s 1.681941609s 1.684279699s 1.716313681s 1.716560855s 1.719190748s 1.719337978s 1.76271926s 1.763254259s 1.765784368s 1.78219693s 1.787648769s 1.797210469s 1.819395194s 1.822383697s 1.839520145s 1.844347095s 1.846990286s 1.855015544s 1.859325497s 1.878600107s 1.899289253s 1.899809688s 1.91098999s 1.945257334s 1.955818137s 1.959701221s 1.963634242s 1.969163467s 1.98602457s 2.002940362s 2.015930442s 2.017729467s 2.028325039s 2.033201966s 2.037501769s 2.039914336s 2.062813814s 2.070020557s 2.070056139s 2.078272283s 2.082322125s 2.087181205s 2.091975667s 2.092547471s 2.098467737s 2.109310232s 2.112202186s 2.121531392s 2.123223097s 2.131512461s 2.144123211s 2.145184501s 2.152128798s 2.162216337s 2.165865612s 2.166281579s 2.167265993s 2.173356891s 2.178269809s 2.19087528s 2.201687973s 2.203286801s 2.216178602s 2.216195264s 2.225492266s 2.231941987s 2.233945848s 2.236331181s 2.240369281s 2.244688856s 2.255724036s 2.258890241s 2.259560772s 2.260976896s 2.263419821s 2.271934979s 2.27561581s 2.295496225s 2.301328662s 2.301644563s 2.310333794s 2.311026704s 2.319401613s 2.331086474s 2.334362861s 2.335899848s 2.339651965s 2.370062079s 2.387068629s 2.394458332s 2.407074577s 2.411726176s 2.429069762s 2.441846936s 2.443886092s 2.447008s 2.459302326s 2.462479541s 2.464463088s 2.496995508s 2.501332906s 2.522351924s 2.52654092s 2.561420014s 2.58990101s 2.59510254s 2.59995208s 2.601779712s 2.602777601s 2.621746998s 2.638669371s 2.676211331s 2.677094908s 2.72049684s 2.725140533s 2.726744152s 2.7310002s 2.734959872s 2.740518768s 2.754286094s 2.800212211s 2.808270338s 2.870201586s 2.936315185s 2.945212902s 2.974402086s 3.052216335s 3.054046534s 3.171473261s 3.191765692s 3.20054004s 3.200752674s 3.2106211s 3.22193288s 3.227699478s 3.253629188s 3.27344146s 3.276242563s 3.295457401s 3.308206629s 3.316077177s 3.316240021s 3.329316509s 3.330450033s 3.332331162s 3.353708768s 3.366049402s 3.389810325s 3.425901016s 3.463636271s 3.519132442s 3.554967159s 3.561264222s 3.569172498s 3.584678967s 3.604118334s 3.624150262s 3.682174847s 3.705290041s 3.755581292s 3.756423808s 3.77990641s 3.802159076s 3.81269847s 3.883206451s 3.934750917s 3.968957965s 4.040787873s 4.060843067s 4.156301413s]
Jan 27 12:25:11.970: INFO: 50 %ile: 2.255724036s
Jan 27 12:25:11.970: INFO: 90 %ile: 3.519132442s
Jan 27 12:25:11.970: INFO: 99 %ile: 4.060843067s
Jan 27 12:25:11.970: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:25:11.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-rg8sf" for this suite.
Jan 27 12:26:04.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:26:04.387: INFO: namespace: e2e-tests-svc-latency-rg8sf, resource: bindings, ignored listing per whitelist
Jan 27 12:26:04.438: INFO: namespace e2e-tests-svc-latency-rg8sf deletion completed in 52.462399846s

• [SLOW TEST:95.344 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
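For reference: the summary above (200 samples, 50th percentile 2.255724036s, 90th 3.519132442s, 99th 4.060843067s) measures, per service, the time from creating a Service that selects the svc-latency-rc pod until its Endpoints object is observed to be populated. A rough manual version of the same probe, with an illustrative service name:

kubectl expose rc svc-latency-rc --name=latency-probe --port=80   # latency-probe is a hypothetical name
kubectl get endpoints latency-probe -w

The watch prints the Endpoints object as soon as an address is added; the gap between the two commands is roughly what each "Got endpoints" line above records.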
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:26:04.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-lj54
STEP: Creating a pod to test atomic-volume-subpath
Jan 27 12:26:04.810: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lj54" in namespace "e2e-tests-subpath-mjng8" to be "success or failure"
Jan 27 12:26:04.829: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 19.304204ms
Jan 27 12:26:07.368: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557939573s
Jan 27 12:26:09.383: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572602112s
Jan 27 12:26:11.399: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588948194s
Jan 27 12:26:13.424: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 8.614257726s
Jan 27 12:26:15.436: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 10.626272089s
Jan 27 12:26:17.992: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 13.182152445s
Jan 27 12:26:20.010: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 15.20011519s
Jan 27 12:26:22.027: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Pending", Reason="", readiness=false. Elapsed: 17.216580165s
Jan 27 12:26:24.045: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 19.234982636s
Jan 27 12:26:26.061: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 21.250873298s
Jan 27 12:26:28.079: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 23.269457116s
Jan 27 12:26:30.094: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 25.284397555s
Jan 27 12:26:32.108: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 27.297738327s
Jan 27 12:26:34.117: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 29.307355562s
Jan 27 12:26:36.135: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 31.324889993s
Jan 27 12:26:38.158: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 33.348167406s
Jan 27 12:26:40.185: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Running", Reason="", readiness=false. Elapsed: 35.374996547s
Jan 27 12:26:42.216: INFO: Pod "pod-subpath-test-projected-lj54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.405515976s
STEP: Saw pod success
Jan 27 12:26:42.216: INFO: Pod "pod-subpath-test-projected-lj54" satisfied condition "success or failure"
Jan 27 12:26:42.227: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-lj54 container test-container-subpath-projected-lj54: 
STEP: delete the pod
Jan 27 12:26:42.972: INFO: Waiting for pod pod-subpath-test-projected-lj54 to disappear
Jan 27 12:26:43.568: INFO: Pod pod-subpath-test-projected-lj54 no longer exists
STEP: Deleting pod pod-subpath-test-projected-lj54
Jan 27 12:26:43.568: INFO: Deleting pod "pod-subpath-test-projected-lj54" in namespace "e2e-tests-subpath-mjng8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:26:43.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mjng8" for this suite.
Jan 27 12:26:49.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:26:49.967: INFO: namespace: e2e-tests-subpath-mjng8, resource: bindings, ignored listing per whitelist
Jan 27 12:26:50.053: INFO: namespace e2e-tests-subpath-mjng8 deletion completed in 6.437811878s

• [SLOW TEST:45.614 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
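For reference: the pod above exercises mounting content from a projected volume at a subPath inside the container. A minimal sketch of that volume shape, with hypothetical names (it assumes a ConfigMap named demo-config with a key named greeting already exists in the current namespace):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/greeting"]
    volumeMounts:
    - name: proj
      mountPath: /etc/demo/greeting
      subPath: greeting     # mount a single key of the projected volume
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-config  # hypothetical ConfigMap
EOF
kubectl logs subpath-demo    # once the pod completes, prints the value of the greeting key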
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:26:50.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 27 12:26:50.344: INFO: Waiting up to 5m0s for pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005" in namespace "e2e-tests-containers-rrkr4" to be "success or failure"
Jan 27 12:26:50.383: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.108499ms
Jan 27 12:26:52.431: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086831965s
Jan 27 12:26:54.447: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102396404s
Jan 27 12:26:56.985: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640248069s
Jan 27 12:26:58.998: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.653298998s
Jan 27 12:27:01.481: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.136822749s
STEP: Saw pod success
Jan 27 12:27:01.481: INFO: Pod "client-containers-4b0d2372-4100-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:27:02.212: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4b0d2372-4100-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 12:27:02.508: INFO: Waiting for pod client-containers-4b0d2372-4100-11ea-b652-0242ac110005 to disappear
Jan 27 12:27:02.523: INFO: Pod client-containers-4b0d2372-4100-11ea-b652-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:27:02.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-rrkr4" for this suite.
Jan 27 12:27:08.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:27:08.780: INFO: namespace: e2e-tests-containers-rrkr4, resource: bindings, ignored listing per whitelist
Jan 27 12:27:08.859: INFO: namespace e2e-tests-containers-rrkr4 deletion completed in 6.320215186s

• [SLOW TEST:18.805 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
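For reference: "override the image's default arguments (docker cmd)" corresponds to setting args on the container spec; args replaces the image's CMD, while command (if set) would replace its ENTRYPOINT. A minimal sketch with hypothetical names:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: args-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    args: ["echo", "overridden arguments"]   # replaces the image's default CMD
EOF
kubectl logs args-demo       # once the pod completes, prints: overridden arguments

Since the stock busybox image declares no ENTRYPOINT, the args above become the full command line, which is enough to show the image default (running sh) being replaced.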
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:27:08.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4v8rw
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4v8rw
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4v8rw
Jan 27 12:27:09.171: INFO: Found 0 stateful pods, waiting for 1
Jan 27 12:27:19.186: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 27 12:27:19.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 27 12:27:19.836: INFO: stderr: "I0127 12:27:19.451544    3097 log.go:172] (0xc000710370) (0xc000734640) Create stream\nI0127 12:27:19.451766    3097 log.go:172] (0xc000710370) (0xc000734640) Stream added, broadcasting: 1\nI0127 12:27:19.457591    3097 log.go:172] (0xc000710370) Reply frame received for 1\nI0127 12:27:19.457626    3097 log.go:172] (0xc000710370) (0xc000596d20) Create stream\nI0127 12:27:19.457668    3097 log.go:172] (0xc000710370) (0xc000596d20) Stream added, broadcasting: 3\nI0127 12:27:19.458798    3097 log.go:172] (0xc000710370) Reply frame received for 3\nI0127 12:27:19.458838    3097 log.go:172] (0xc000710370) (0xc0007346e0) Create stream\nI0127 12:27:19.458846    3097 log.go:172] (0xc000710370) (0xc0007346e0) Stream added, broadcasting: 5\nI0127 12:27:19.460034    3097 log.go:172] (0xc000710370) Reply frame received for 5\nI0127 12:27:19.677153    3097 log.go:172] (0xc000710370) Data frame received for 3\nI0127 12:27:19.677214    3097 log.go:172] (0xc000596d20) (3) Data frame handling\nI0127 12:27:19.677233    3097 log.go:172] (0xc000596d20) (3) Data frame sent\nI0127 12:27:19.825083    3097 log.go:172] (0xc000710370) Data frame received for 1\nI0127 12:27:19.825152    3097 log.go:172] (0xc000734640) (1) Data frame handling\nI0127 12:27:19.825189    3097 log.go:172] (0xc000734640) (1) Data frame sent\nI0127 12:27:19.825369    3097 log.go:172] (0xc000710370) (0xc000734640) Stream removed, broadcasting: 1\nI0127 12:27:19.825823    3097 log.go:172] (0xc000710370) (0xc000596d20) Stream removed, broadcasting: 3\nI0127 12:27:19.826515    3097 log.go:172] (0xc000710370) (0xc0007346e0) Stream removed, broadcasting: 5\nI0127 12:27:19.826600    3097 log.go:172] (0xc000710370) (0xc000734640) Stream removed, broadcasting: 1\nI0127 12:27:19.826633    3097 log.go:172] (0xc000710370) (0xc000596d20) Stream removed, broadcasting: 3\nI0127 12:27:19.826642    3097 log.go:172] (0xc000710370) (0xc0007346e0) Stream removed, broadcasting: 5\n"
Jan 27 12:27:19.836: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 27 12:27:19.836: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
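[editor's note] The exec above is how this spec drives a stateful pod unhealthy: it moves nginx's index.html out of the web root so the readiness check starts failing, and a later exec moves the file back to recover. A minimal Go sketch of the same remote command follows, including a 10-second retry similar to the "Waiting 10s to retry failed RunHostCmd" lines further down; the helper name runSSCommand and the retry budget are illustrative, not the framework's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSCommand is an illustrative stand-in for the framework's RunHostCmd:
// it shells out to kubectl exec against one stateful pod and retries on
// failure, mirroring the retry behaviour logged by this spec.
func runSSCommand(namespace, pod, cmd string) (string, error) {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ { // retry budget is an assumption
		out, err := exec.Command(
			"kubectl", "--kubeconfig=/root/.kube/config",
			"exec", "--namespace="+namespace, pod,
			"--", "/bin/sh", "-c", cmd,
		).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = fmt.Errorf("attempt %d failed: %v (output: %s)", attempt+1, err, out)
		time.Sleep(10 * time.Second)
	}
	return "", lastErr
}

func main() {
	ns := "e2e-tests-statefulset-4v8rw"
	// Break readiness: move index.html out of nginx's web root ...
	if out, err := runSSCommand(ns, "ss-0", "mv -v /usr/share/nginx/html/index.html /tmp/ || true"); err == nil {
		fmt.Print(out)
	}
	// ... and restore it later so the pod can become Ready again.
	if out, err := runSSCommand(ns, "ss-0", "mv -v /tmp/index.html /usr/share/nginx/html/ || true"); err == nil {
		fmt.Print(out)
	}
}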

Jan 27 12:27:19.913: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 27 12:27:29.932: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 27 12:27:29.932: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 12:27:29.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999484s
Jan 27 12:27:31.004: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982699947s
Jan 27 12:27:32.025: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96317552s
Jan 27 12:27:33.043: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.941685648s
Jan 27 12:27:34.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.924448667s
Jan 27 12:27:35.085: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.901540009s
Jan 27 12:27:36.107: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.88251498s
Jan 27 12:27:37.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.859836275s
Jan 27 12:27:38.159: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.838204004s
Jan 27 12:27:39.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 807.400314ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-4v8rw
Jan 27 12:27:40.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:27:40.864: INFO: stderr: "I0127 12:27:40.430010    3118 log.go:172] (0xc0008b22c0) (0xc000768640) Create stream\nI0127 12:27:40.430130    3118 log.go:172] (0xc0008b22c0) (0xc000768640) Stream added, broadcasting: 1\nI0127 12:27:40.436489    3118 log.go:172] (0xc0008b22c0) Reply frame received for 1\nI0127 12:27:40.436519    3118 log.go:172] (0xc0008b22c0) (0xc0005c6c80) Create stream\nI0127 12:27:40.436529    3118 log.go:172] (0xc0008b22c0) (0xc0005c6c80) Stream added, broadcasting: 3\nI0127 12:27:40.437801    3118 log.go:172] (0xc0008b22c0) Reply frame received for 3\nI0127 12:27:40.437846    3118 log.go:172] (0xc0008b22c0) (0xc0005c6dc0) Create stream\nI0127 12:27:40.437854    3118 log.go:172] (0xc0008b22c0) (0xc0005c6dc0) Stream added, broadcasting: 5\nI0127 12:27:40.439199    3118 log.go:172] (0xc0008b22c0) Reply frame received for 5\nI0127 12:27:40.705870    3118 log.go:172] (0xc0008b22c0) Data frame received for 3\nI0127 12:27:40.706248    3118 log.go:172] (0xc0005c6c80) (3) Data frame handling\nI0127 12:27:40.706278    3118 log.go:172] (0xc0005c6c80) (3) Data frame sent\nI0127 12:27:40.852941    3118 log.go:172] (0xc0008b22c0) Data frame received for 1\nI0127 12:27:40.853024    3118 log.go:172] (0xc000768640) (1) Data frame handling\nI0127 12:27:40.853049    3118 log.go:172] (0xc000768640) (1) Data frame sent\nI0127 12:27:40.853924    3118 log.go:172] (0xc0008b22c0) (0xc000768640) Stream removed, broadcasting: 1\nI0127 12:27:40.854344    3118 log.go:172] (0xc0008b22c0) (0xc0005c6c80) Stream removed, broadcasting: 3\nI0127 12:27:40.854646    3118 log.go:172] (0xc0008b22c0) (0xc0005c6dc0) Stream removed, broadcasting: 5\nI0127 12:27:40.854678    3118 log.go:172] (0xc0008b22c0) (0xc000768640) Stream removed, broadcasting: 1\nI0127 12:27:40.854688    3118 log.go:172] (0xc0008b22c0) (0xc0005c6c80) Stream removed, broadcasting: 3\nI0127 12:27:40.854699    3118 log.go:172] (0xc0008b22c0) (0xc0005c6dc0) Stream removed, broadcasting: 5\nI0127 12:27:40.855065    3118 log.go:172] (0xc0008b22c0) Go away received\n"
Jan 27 12:27:40.864: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 27 12:27:40.864: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 27 12:27:40.884: INFO: Found 1 stateful pods, waiting for 3
Jan 27 12:27:50.911: INFO: Found 2 stateful pods, waiting for 3
Jan 27 12:28:00.899: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:28:00.899: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:28:00.899: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 27 12:28:10.907: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:28:10.907: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 27 12:28:10.907: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 27 12:28:10.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 27 12:28:11.616: INFO: stderr: "I0127 12:28:11.193525    3140 log.go:172] (0xc0001389a0) (0xc0005ff540) Create stream\nI0127 12:28:11.193737    3140 log.go:172] (0xc0001389a0) (0xc0005ff540) Stream added, broadcasting: 1\nI0127 12:28:11.198931    3140 log.go:172] (0xc0001389a0) Reply frame received for 1\nI0127 12:28:11.198972    3140 log.go:172] (0xc0001389a0) (0xc0007be000) Create stream\nI0127 12:28:11.198982    3140 log.go:172] (0xc0001389a0) (0xc0007be000) Stream added, broadcasting: 3\nI0127 12:28:11.200161    3140 log.go:172] (0xc0001389a0) Reply frame received for 3\nI0127 12:28:11.200187    3140 log.go:172] (0xc0001389a0) (0xc0008b0000) Create stream\nI0127 12:28:11.200197    3140 log.go:172] (0xc0001389a0) (0xc0008b0000) Stream added, broadcasting: 5\nI0127 12:28:11.201062    3140 log.go:172] (0xc0001389a0) Reply frame received for 5\nI0127 12:28:11.330509    3140 log.go:172] (0xc0001389a0) Data frame received for 3\nI0127 12:28:11.330670    3140 log.go:172] (0xc0007be000) (3) Data frame handling\nI0127 12:28:11.330689    3140 log.go:172] (0xc0007be000) (3) Data frame sent\nI0127 12:28:11.610104    3140 log.go:172] (0xc0001389a0) (0xc0007be000) Stream removed, broadcasting: 3\nI0127 12:28:11.610287    3140 log.go:172] (0xc0001389a0) Data frame received for 1\nI0127 12:28:11.610296    3140 log.go:172] (0xc0005ff540) (1) Data frame handling\nI0127 12:28:11.610303    3140 log.go:172] (0xc0005ff540) (1) Data frame sent\nI0127 12:28:11.610312    3140 log.go:172] (0xc0001389a0) (0xc0005ff540) Stream removed, broadcasting: 1\nI0127 12:28:11.610494    3140 log.go:172] (0xc0001389a0) (0xc0008b0000) Stream removed, broadcasting: 5\nI0127 12:28:11.610514    3140 log.go:172] (0xc0001389a0) (0xc0005ff540) Stream removed, broadcasting: 1\nI0127 12:28:11.610518    3140 log.go:172] (0xc0001389a0) (0xc0007be000) Stream removed, broadcasting: 3\nI0127 12:28:11.610528    3140 log.go:172] (0xc0001389a0) (0xc0008b0000) Stream removed, broadcasting: 5\nI0127 12:28:11.610851    3140 log.go:172] (0xc0001389a0) Go away received\n"
Jan 27 12:28:11.616: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 27 12:28:11.616: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 27 12:28:11.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 27 12:28:12.276: INFO: stderr: "I0127 12:28:11.809902    3161 log.go:172] (0xc0008742c0) (0xc0008b45a0) Create stream\nI0127 12:28:11.810219    3161 log.go:172] (0xc0008742c0) (0xc0008b45a0) Stream added, broadcasting: 1\nI0127 12:28:11.816306    3161 log.go:172] (0xc0008742c0) Reply frame received for 1\nI0127 12:28:11.816355    3161 log.go:172] (0xc0008742c0) (0xc000710000) Create stream\nI0127 12:28:11.816361    3161 log.go:172] (0xc0008742c0) (0xc000710000) Stream added, broadcasting: 3\nI0127 12:28:11.817831    3161 log.go:172] (0xc0008742c0) Reply frame received for 3\nI0127 12:28:11.817855    3161 log.go:172] (0xc0008742c0) (0xc00066cdc0) Create stream\nI0127 12:28:11.817865    3161 log.go:172] (0xc0008742c0) (0xc00066cdc0) Stream added, broadcasting: 5\nI0127 12:28:11.818899    3161 log.go:172] (0xc0008742c0) Reply frame received for 5\nI0127 12:28:12.009929    3161 log.go:172] (0xc0008742c0) Data frame received for 3\nI0127 12:28:12.010016    3161 log.go:172] (0xc000710000) (3) Data frame handling\nI0127 12:28:12.010034    3161 log.go:172] (0xc000710000) (3) Data frame sent\nI0127 12:28:12.267176    3161 log.go:172] (0xc0008742c0) Data frame received for 1\nI0127 12:28:12.267296    3161 log.go:172] (0xc0008b45a0) (1) Data frame handling\nI0127 12:28:12.267343    3161 log.go:172] (0xc0008b45a0) (1) Data frame sent\nI0127 12:28:12.267895    3161 log.go:172] (0xc0008742c0) (0xc000710000) Stream removed, broadcasting: 3\nI0127 12:28:12.268075    3161 log.go:172] (0xc0008742c0) (0xc0008b45a0) Stream removed, broadcasting: 1\nI0127 12:28:12.268218    3161 log.go:172] (0xc0008742c0) (0xc00066cdc0) Stream removed, broadcasting: 5\nI0127 12:28:12.268441    3161 log.go:172] (0xc0008742c0) (0xc0008b45a0) Stream removed, broadcasting: 1\nI0127 12:28:12.268549    3161 log.go:172] (0xc0008742c0) (0xc000710000) Stream removed, broadcasting: 3\nI0127 12:28:12.268579    3161 log.go:172] (0xc0008742c0) (0xc00066cdc0) Stream removed, broadcasting: 5\n"
Jan 27 12:28:12.276: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 27 12:28:12.276: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 27 12:28:12.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 27 12:28:12.846: INFO: stderr: "I0127 12:28:12.434543    3182 log.go:172] (0xc0001380b0) (0xc0006e8000) Create stream\nI0127 12:28:12.434689    3182 log.go:172] (0xc0001380b0) (0xc0006e8000) Stream added, broadcasting: 1\nI0127 12:28:12.438476    3182 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0127 12:28:12.438494    3182 log.go:172] (0xc0001380b0) (0xc00029cc80) Create stream\nI0127 12:28:12.438499    3182 log.go:172] (0xc0001380b0) (0xc00029cc80) Stream added, broadcasting: 3\nI0127 12:28:12.439757    3182 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0127 12:28:12.439829    3182 log.go:172] (0xc0001380b0) (0xc00029cdc0) Create stream\nI0127 12:28:12.439839    3182 log.go:172] (0xc0001380b0) (0xc00029cdc0) Stream added, broadcasting: 5\nI0127 12:28:12.440886    3182 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0127 12:28:12.713996    3182 log.go:172] (0xc0001380b0) Data frame received for 3\nI0127 12:28:12.714044    3182 log.go:172] (0xc00029cc80) (3) Data frame handling\nI0127 12:28:12.714078    3182 log.go:172] (0xc00029cc80) (3) Data frame sent\nI0127 12:28:12.841359    3182 log.go:172] (0xc0001380b0) (0xc00029cc80) Stream removed, broadcasting: 3\nI0127 12:28:12.841510    3182 log.go:172] (0xc0001380b0) Data frame received for 1\nI0127 12:28:12.841567    3182 log.go:172] (0xc0006e8000) (1) Data frame handling\nI0127 12:28:12.841607    3182 log.go:172] (0xc0006e8000) (1) Data frame sent\nI0127 12:28:12.841656    3182 log.go:172] (0xc0001380b0) (0xc00029cdc0) Stream removed, broadcasting: 5\nI0127 12:28:12.841691    3182 log.go:172] (0xc0001380b0) (0xc0006e8000) Stream removed, broadcasting: 1\nI0127 12:28:12.841704    3182 log.go:172] (0xc0001380b0) Go away received\nI0127 12:28:12.842031    3182 log.go:172] (0xc0001380b0) (0xc0006e8000) Stream removed, broadcasting: 1\nI0127 12:28:12.842076    3182 log.go:172] (0xc0001380b0) (0xc00029cc80) Stream removed, broadcasting: 3\nI0127 12:28:12.842103    3182 log.go:172] (0xc0001380b0) (0xc00029cdc0) Stream removed, broadcasting: 5\n"
Jan 27 12:28:12.846: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 27 12:28:12.846: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 27 12:28:12.846: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 12:28:12.932: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 27 12:28:22.957: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 27 12:28:22.957: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 27 12:28:22.957: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 27 12:28:23.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999701s
Jan 27 12:28:24.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.95494062s
Jan 27 12:28:25.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.919980604s
Jan 27 12:28:26.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.904034152s
Jan 27 12:28:27.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.891362881s
Jan 27 12:28:28.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.872221429s
Jan 27 12:28:29.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.851400553s
Jan 27 12:28:30.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.832153951s
Jan 27 12:28:31.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.772946254s
Jan 27 12:28:32.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 750.747144ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-4v8rw
Jan 27 12:28:33.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:28:33.906: INFO: stderr: "I0127 12:28:33.483367    3205 log.go:172] (0xc0006a60b0) (0xc000704640) Create stream\nI0127 12:28:33.483572    3205 log.go:172] (0xc0006a60b0) (0xc000704640) Stream added, broadcasting: 1\nI0127 12:28:33.489820    3205 log.go:172] (0xc0006a60b0) Reply frame received for 1\nI0127 12:28:33.489855    3205 log.go:172] (0xc0006a60b0) (0xc00055ce60) Create stream\nI0127 12:28:33.489867    3205 log.go:172] (0xc0006a60b0) (0xc00055ce60) Stream added, broadcasting: 3\nI0127 12:28:33.491164    3205 log.go:172] (0xc0006a60b0) Reply frame received for 3\nI0127 12:28:33.491188    3205 log.go:172] (0xc0006a60b0) (0xc00055cfa0) Create stream\nI0127 12:28:33.491197    3205 log.go:172] (0xc0006a60b0) (0xc00055cfa0) Stream added, broadcasting: 5\nI0127 12:28:33.492102    3205 log.go:172] (0xc0006a60b0) Reply frame received for 5\nI0127 12:28:33.644470    3205 log.go:172] (0xc0006a60b0) Data frame received for 3\nI0127 12:28:33.644625    3205 log.go:172] (0xc00055ce60) (3) Data frame handling\nI0127 12:28:33.644645    3205 log.go:172] (0xc00055ce60) (3) Data frame sent\nI0127 12:28:33.897851    3205 log.go:172] (0xc0006a60b0) Data frame received for 1\nI0127 12:28:33.898028    3205 log.go:172] (0xc000704640) (1) Data frame handling\nI0127 12:28:33.898059    3205 log.go:172] (0xc000704640) (1) Data frame sent\nI0127 12:28:33.899032    3205 log.go:172] (0xc0006a60b0) (0xc000704640) Stream removed, broadcasting: 1\nI0127 12:28:33.900051    3205 log.go:172] (0xc0006a60b0) (0xc00055ce60) Stream removed, broadcasting: 3\nI0127 12:28:33.900232    3205 log.go:172] (0xc0006a60b0) (0xc00055cfa0) Stream removed, broadcasting: 5\nI0127 12:28:33.900336    3205 log.go:172] (0xc0006a60b0) Go away received\nI0127 12:28:33.900402    3205 log.go:172] (0xc0006a60b0) (0xc000704640) Stream removed, broadcasting: 1\nI0127 12:28:33.900454    3205 log.go:172] (0xc0006a60b0) (0xc00055ce60) Stream removed, broadcasting: 3\nI0127 12:28:33.900491    3205 log.go:172] (0xc0006a60b0) (0xc00055cfa0) Stream removed, broadcasting: 5\n"
Jan 27 12:28:33.906: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 27 12:28:33.906: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 27 12:28:33.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:28:34.609: INFO: stderr: "I0127 12:28:34.133578    3227 log.go:172] (0xc0006fc0b0) (0xc00071e5a0) Create stream\nI0127 12:28:34.133863    3227 log.go:172] (0xc0006fc0b0) (0xc00071e5a0) Stream added, broadcasting: 1\nI0127 12:28:34.140376    3227 log.go:172] (0xc0006fc0b0) Reply frame received for 1\nI0127 12:28:34.140437    3227 log.go:172] (0xc0006fc0b0) (0xc00073e000) Create stream\nI0127 12:28:34.140456    3227 log.go:172] (0xc0006fc0b0) (0xc00073e000) Stream added, broadcasting: 3\nI0127 12:28:34.144066    3227 log.go:172] (0xc0006fc0b0) Reply frame received for 3\nI0127 12:28:34.144097    3227 log.go:172] (0xc0006fc0b0) (0xc0005e8fa0) Create stream\nI0127 12:28:34.144119    3227 log.go:172] (0xc0006fc0b0) (0xc0005e8fa0) Stream added, broadcasting: 5\nI0127 12:28:34.148699    3227 log.go:172] (0xc0006fc0b0) Reply frame received for 5\nI0127 12:28:34.388866    3227 log.go:172] (0xc0006fc0b0) Data frame received for 3\nI0127 12:28:34.388919    3227 log.go:172] (0xc00073e000) (3) Data frame handling\nI0127 12:28:34.388937    3227 log.go:172] (0xc00073e000) (3) Data frame sent\nI0127 12:28:34.603925    3227 log.go:172] (0xc0006fc0b0) (0xc00073e000) Stream removed, broadcasting: 3\nI0127 12:28:34.604216    3227 log.go:172] (0xc0006fc0b0) Data frame received for 1\nI0127 12:28:34.604239    3227 log.go:172] (0xc00071e5a0) (1) Data frame handling\nI0127 12:28:34.604252    3227 log.go:172] (0xc00071e5a0) (1) Data frame sent\nI0127 12:28:34.604263    3227 log.go:172] (0xc0006fc0b0) (0xc00071e5a0) Stream removed, broadcasting: 1\nI0127 12:28:34.604327    3227 log.go:172] (0xc0006fc0b0) (0xc0005e8fa0) Stream removed, broadcasting: 5\nI0127 12:28:34.604376    3227 log.go:172] (0xc0006fc0b0) Go away received\nI0127 12:28:34.604511    3227 log.go:172] (0xc0006fc0b0) (0xc00071e5a0) Stream removed, broadcasting: 1\nI0127 12:28:34.604529    3227 log.go:172] (0xc0006fc0b0) (0xc00073e000) Stream removed, broadcasting: 3\nI0127 12:28:34.604539    3227 log.go:172] (0xc0006fc0b0) (0xc0005e8fa0) Stream removed, broadcasting: 5\n"
Jan 27 12:28:34.609: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 27 12:28:34.609: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 27 12:28:34.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:28:35.248: INFO: rc: 126
Jan 27 12:28:35.248: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown
 I0127 12:28:34.945390    3249 log.go:172] (0xc0008402c0) (0xc000712640) Create stream
I0127 12:28:34.945604    3249 log.go:172] (0xc0008402c0) (0xc000712640) Stream added, broadcasting: 1
I0127 12:28:34.956030    3249 log.go:172] (0xc0008402c0) Reply frame received for 1
I0127 12:28:34.956110    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Create stream
I0127 12:28:34.956120    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Stream added, broadcasting: 3
I0127 12:28:34.968473    3249 log.go:172] (0xc0008402c0) Reply frame received for 3
I0127 12:28:34.968528    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Create stream
I0127 12:28:34.968541    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Stream added, broadcasting: 5
I0127 12:28:34.969838    3249 log.go:172] (0xc0008402c0) Reply frame received for 5
I0127 12:28:35.240114    3249 log.go:172] (0xc0008402c0) Data frame received for 3
I0127 12:28:35.240175    3249 log.go:172] (0xc00030ed20) (3) Data frame handling
I0127 12:28:35.240189    3249 log.go:172] (0xc00030ed20) (3) Data frame sent
I0127 12:28:35.243611    3249 log.go:172] (0xc0008402c0) Data frame received for 1
I0127 12:28:35.243621    3249 log.go:172] (0xc000712640) (1) Data frame handling
I0127 12:28:35.243627    3249 log.go:172] (0xc000712640) (1) Data frame sent
I0127 12:28:35.243633    3249 log.go:172] (0xc0008402c0) (0xc000712640) Stream removed, broadcasting: 1
I0127 12:28:35.244315    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Stream removed, broadcasting: 3
I0127 12:28:35.244341    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Stream removed, broadcasting: 5
I0127 12:28:35.244366    3249 log.go:172] (0xc0008402c0) Go away received
I0127 12:28:35.244398    3249 log.go:172] (0xc0008402c0) (0xc000712640) Stream removed, broadcasting: 1
I0127 12:28:35.244412    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Stream removed, broadcasting: 3
I0127 12:28:35.244419    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc000ba3dd0 exit status 126   true [0xc000ed8750 0xc000ed8768 0xc000ed8780] [0xc000ed8750 0xc000ed8768 0xc000ed8780] [0xc000ed8760 0xc000ed8778] [0x935700 0x935700] 0xc001ca8600 }:
Command stdout:
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown

stderr:
I0127 12:28:34.945390    3249 log.go:172] (0xc0008402c0) (0xc000712640) Create stream
I0127 12:28:34.945604    3249 log.go:172] (0xc0008402c0) (0xc000712640) Stream added, broadcasting: 1
I0127 12:28:34.956030    3249 log.go:172] (0xc0008402c0) Reply frame received for 1
I0127 12:28:34.956110    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Create stream
I0127 12:28:34.956120    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Stream added, broadcasting: 3
I0127 12:28:34.968473    3249 log.go:172] (0xc0008402c0) Reply frame received for 3
I0127 12:28:34.968528    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Create stream
I0127 12:28:34.968541    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Stream added, broadcasting: 5
I0127 12:28:34.969838    3249 log.go:172] (0xc0008402c0) Reply frame received for 5
I0127 12:28:35.240114    3249 log.go:172] (0xc0008402c0) Data frame received for 3
I0127 12:28:35.240175    3249 log.go:172] (0xc00030ed20) (3) Data frame handling
I0127 12:28:35.240189    3249 log.go:172] (0xc00030ed20) (3) Data frame sent
I0127 12:28:35.243611    3249 log.go:172] (0xc0008402c0) Data frame received for 1
I0127 12:28:35.243621    3249 log.go:172] (0xc000712640) (1) Data frame handling
I0127 12:28:35.243627    3249 log.go:172] (0xc000712640) (1) Data frame sent
I0127 12:28:35.243633    3249 log.go:172] (0xc0008402c0) (0xc000712640) Stream removed, broadcasting: 1
I0127 12:28:35.244315    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Stream removed, broadcasting: 3
I0127 12:28:35.244341    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Stream removed, broadcasting: 5
I0127 12:28:35.244366    3249 log.go:172] (0xc0008402c0) Go away received
I0127 12:28:35.244398    3249 log.go:172] (0xc0008402c0) (0xc000712640) Stream removed, broadcasting: 1
I0127 12:28:35.244412    3249 log.go:172] (0xc0008402c0) (0xc00030ed20) Stream removed, broadcasting: 3
I0127 12:28:35.244419    3249 log.go:172] (0xc0008402c0) (0xc0003a4000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126

Jan 27 12:28:45.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:28:45.350: INFO: rc: 1
Jan 27 12:28:45.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0010616b0 exit status 1   true [0xc001c7c3a8 0xc001c7c3c8 0xc001c7c3e0] [0xc001c7c3a8 0xc001c7c3c8 0xc001c7c3e0] [0xc001c7c3c0 0xc001c7c3d8] [0x935700 0x935700] 0xc000badf80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:28:55.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:28:56.407: INFO: rc: 1
Jan 27 12:28:56.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0014d3140 exit status 1   true [0xc00130e690 0xc00130e6a8 0xc00130e6c0] [0xc00130e690 0xc00130e6a8 0xc00130e6c0] [0xc00130e6a0 0xc00130e6b8] [0x935700 0x935700] 0xc0008c8ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:29:06.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:29:06.557: INFO: rc: 1
Jan 27 12:29:06.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a98120 exit status 1   true [0xc0004f5978 0xc0004f5a08 0xc0004f5b50] [0xc0004f5978 0xc0004f5a08 0xc0004f5b50] [0xc0004f59f8 0xc0004f5a30] [0x935700 0x935700] 0xc0008e0960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:29:16.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:29:16.720: INFO: rc: 1
Jan 27 12:29:16.720: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a98270 exit status 1   true [0xc0004f5c00 0xc0004f5cc8 0xc0004f5d98] [0xc0004f5c00 0xc0004f5cc8 0xc0004f5d98] [0xc0004f5cc0 0xc0004f5d58] [0x935700 0x935700] 0xc0008e0ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:29:26.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:29:26.835: INFO: rc: 1
Jan 27 12:29:26.835: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a1b0 exit status 1   true [0xc000ed8000 0xc000ed8018 0xc000ed8030] [0xc000ed8000 0xc000ed8018 0xc000ed8030] [0xc000ed8010 0xc000ed8028] [0x935700 0x935700] 0xc000bac5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:29:36.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:29:36.962: INFO: rc: 1
Jan 27 12:29:36.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a3c0 exit status 1   true [0xc000ed8038 0xc000ed8050 0xc000ed8068] [0xc000ed8038 0xc000ed8050 0xc000ed8068] [0xc000ed8048 0xc000ed8060] [0x935700 0x935700] 0xc000badaa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:29:46.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:29:47.104: INFO: rc: 1
Jan 27 12:29:47.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a5d0 exit status 1   true [0xc000ed8070 0xc000ed8088 0xc000ed80a0] [0xc000ed8070 0xc000ed8088 0xc000ed80a0] [0xc000ed8080 0xc000ed8098] [0x935700 0x935700] 0xc000badda0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:29:57.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:29:57.246: INFO: rc: 1
Jan 27 12:29:57.246: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be2120 exit status 1   true [0xc00016e0d8 0xc00016e230 0xc00016e300] [0xc00016e0d8 0xc00016e230 0xc00016e300] [0xc00016e0f8 0xc00016e288] [0x935700 0x935700] 0xc001398420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:30:07.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:30:07.412: INFO: rc: 1
Jan 27 12:30:07.412: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a7b0 exit status 1   true [0xc000ed80a8 0xc000ed80c0 0xc000ed80d8] [0xc000ed80a8 0xc000ed80c0 0xc000ed80d8] [0xc000ed80b8 0xc000ed80d0] [0x935700 0x935700] 0xc00119ca80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:30:17.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:30:17.545: INFO: rc: 1
Jan 27 12:30:17.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001609110 exit status 1   true [0xc001c7c000 0xc001c7c018 0xc001c7c030] [0xc001c7c000 0xc001c7c018 0xc001c7c030] [0xc001c7c010 0xc001c7c028] [0x935700 0x935700] 0xc001c4cf60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:30:27.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:30:27.683: INFO: rc: 1
Jan 27 12:30:27.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a990 exit status 1   true [0xc000ed80e0 0xc000ed80f8 0xc000ed8110] [0xc000ed80e0 0xc000ed80f8 0xc000ed8110] [0xc000ed80f0 0xc000ed8108] [0x935700 0x935700] 0xc00119cd20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:30:37.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:30:37.851: INFO: rc: 1
Jan 27 12:30:37.852: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be2300 exit status 1   true [0xc00016e320 0xc00016e370 0xc00016e470] [0xc00016e320 0xc00016e370 0xc00016e470] [0xc00016e350 0xc00016e410] [0x935700 0x935700] 0xc001398ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:30:47.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:30:48.001: INFO: rc: 1
Jan 27 12:30:48.001: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199ab70 exit status 1   true [0xc000ed8118 0xc000ed8130 0xc000ed8148] [0xc000ed8118 0xc000ed8130 0xc000ed8148] [0xc000ed8128 0xc000ed8140] [0x935700 0x935700] 0xc001a748a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:30:58.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:30:58.164: INFO: rc: 1
Jan 27 12:30:58.164: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199ac90 exit status 1   true [0xc000ed8150 0xc000ed8168 0xc000ed8180] [0xc000ed8150 0xc000ed8168 0xc000ed8180] [0xc000ed8160 0xc000ed8178] [0x935700 0x935700] 0xc001a5bec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:31:08.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:31:08.338: INFO: rc: 1
Jan 27 12:31:08.338: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a98150 exit status 1   true [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f5a08 0xc0004f5b50] [0x935700 0x935700] 0xc001a759e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:31:18.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:31:18.529: INFO: rc: 1
Jan 27 12:31:18.529: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001609140 exit status 1   true [0xc000ed8000 0xc000ed8018 0xc000ed8030] [0xc000ed8000 0xc000ed8018 0xc000ed8030] [0xc000ed8010 0xc000ed8028] [0x935700 0x935700] 0xc00119ca20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:31:28.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:31:28.663: INFO: rc: 1
Jan 27 12:31:28.663: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001609650 exit status 1   true [0xc000ed8038 0xc000ed8050 0xc000ed8068] [0xc000ed8038 0xc000ed8050 0xc000ed8068] [0xc000ed8048 0xc000ed8060] [0x935700 0x935700] 0xc00119ccc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:31:38.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:31:38.753: INFO: rc: 1
Jan 27 12:31:38.753: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a1e0 exit status 1   true [0xc001c7c000 0xc001c7c018 0xc001c7c030] [0xc001c7c000 0xc001c7c018 0xc001c7c030] [0xc001c7c010 0xc001c7c028] [0x935700 0x935700] 0xc000bac5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:31:48.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:31:48.854: INFO: rc: 1
Jan 27 12:31:48.854: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001609800 exit status 1   true [0xc000ed8070 0xc000ed8088 0xc000ed80a0] [0xc000ed8070 0xc000ed8088 0xc000ed80a0] [0xc000ed8080 0xc000ed8098] [0x935700 0x935700] 0xc0008e0060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:31:58.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:31:58.970: INFO: rc: 1
Jan 27 12:31:58.970: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a982d0 exit status 1   true [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5cc8 0xc0004f5d98] [0x935700 0x935700] 0xc001c4cf60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:32:08.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:32:09.124: INFO: rc: 1
Jan 27 12:32:09.124: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be21b0 exit status 1   true [0xc00016e000 0xc00016e0f8 0xc00016e288] [0xc00016e000 0xc00016e0f8 0xc00016e288] [0xc00016e0e8 0xc00016e240] [0x935700 0x935700] 0xc001398420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:32:19.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:32:19.261: INFO: rc: 1
Jan 27 12:32:19.261: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a3f0 exit status 1   true [0xc001c7c038 0xc001c7c050 0xc001c7c068] [0xc001c7c038 0xc001c7c050 0xc001c7c068] [0xc001c7c048 0xc001c7c060] [0x935700 0x935700] 0xc000badaa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:32:29.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:32:29.402: INFO: rc: 1
Jan 27 12:32:29.402: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a983f0 exit status 1   true [0xc0004f5e20 0xc00130e000 0xc00130e018] [0xc0004f5e20 0xc00130e000 0xc00130e018] [0xc0004f5f20 0xc00130e010] [0x935700 0x935700] 0xc001c4d380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:32:39.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:32:39.530: INFO: rc: 1
Jan 27 12:32:39.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be2360 exit status 1   true [0xc00016e300 0xc00016e350 0xc00016e410] [0xc00016e300 0xc00016e350 0xc00016e410] [0xc00016e328 0xc00016e408] [0x935700 0x935700] 0xc001398ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:32:49.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:32:49.666: INFO: rc: 1
Jan 27 12:32:49.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a6c0 exit status 1   true [0xc001c7c070 0xc001c7c088 0xc001c7c0a0] [0xc001c7c070 0xc001c7c088 0xc001c7c0a0] [0xc001c7c080 0xc001c7c098] [0x935700 0x935700] 0xc000badda0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:32:59.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:32:59.803: INFO: rc: 1
Jan 27 12:32:59.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a870 exit status 1   true [0xc001c7c0a8 0xc001c7c0c0 0xc001c7c0d8] [0xc001c7c0a8 0xc001c7c0c0 0xc001c7c0d8] [0xc001c7c0b8 0xc001c7c0d0] [0x935700 0x935700] 0xc0019f7860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:33:09.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:33:09.951: INFO: rc: 1
Jan 27 12:33:09.952: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be2120 exit status 1   true [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f59c8 0xc0004f5a20 0xc0004f5c00] [0xc0004f5a08 0xc0004f5b50] [0x935700 0x935700] 0xc000bac5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:33:19.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:33:20.077: INFO: rc: 1
Jan 27 12:33:20.078: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be2300 exit status 1   true [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5c28 0xc0004f5cf8 0xc0004f5db8] [0xc0004f5cc8 0xc0004f5d98] [0x935700 0x935700] 0xc000badaa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:33:30.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:33:30.176: INFO: rc: 1
Jan 27 12:33:30.176: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00199a1b0 exit status 1   true [0xc00016e000 0xc00016e0f8 0xc00016e288] [0xc00016e000 0xc00016e0f8 0xc00016e288] [0xc00016e0e8 0xc00016e240] [0x935700 0x935700] 0xc00119cb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan 27 12:33:40.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4v8rw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 27 12:33:40.302: INFO: rc: 1
Jan 27 12:33:40.302: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan 27 12:33:40.302: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 27 12:33:40.326: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4v8rw
Jan 27 12:33:40.331: INFO: Scaling statefulset ss to 0
Jan 27 12:33:40.346: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 12:33:40.349: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:33:40.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4v8rw" for this suite.
Jan 27 12:33:46.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:33:46.818: INFO: namespace: e2e-tests-statefulset-4v8rw, resource: bindings, ignored listing per whitelist
Jan 27 12:33:46.866: INFO: namespace e2e-tests-statefulset-4v8rw deletion completed in 6.259596616s

• [SLOW TEST:398.007 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
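[editor's note] The AfterEach above scales the StatefulSet to 0, waits for status.replicas to reach 0, and then deletes it. A hedged client-go sketch of that cleanup path, assuming a recent client-go where the typed clients take a context; the helper name cleanupStatefulSet and the polling interval/timeout are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// cleanupStatefulSet scales the named StatefulSet to zero, waits for
// status.replicas to drop to 0, then deletes the object, roughly matching
// the AfterEach log lines above.
func cleanupStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 0
	if _, err := cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// Poll until the controller reports zero replicas, then delete.
	if err := wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ss.Status.Replicas == 0, nil
	}); err != nil {
		return err
	}
	return cs.AppsV1().StatefulSets(ns).Delete(ctx, name, metav1.DeleteOptions{})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := cleanupStatefulSet(context.Background(), cs, "e2e-tests-statefulset-4v8rw", "ss"); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}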
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:33:46.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 27 12:33:47.032: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636264,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 12:33:47.032: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636264,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 27 12:33:57.081: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636277,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 27 12:33:57.082: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636277,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 27 12:34:07.120: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636290,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 12:34:07.121: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636290,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 27 12:34:17.155: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636303,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 12:34:17.155: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-a,UID:436e8f93-4101-11ea-a994-fa163e34d433,ResourceVersion:19636303,Generation:0,CreationTimestamp:2020-01-27 12:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 27 12:34:27.197: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-b,UID:5b5b494e-4101-11ea-a994-fa163e34d433,ResourceVersion:19636316,Generation:0,CreationTimestamp:2020-01-27 12:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 12:34:27.197: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-b,UID:5b5b494e-4101-11ea-a994-fa163e34d433,ResourceVersion:19636316,Generation:0,CreationTimestamp:2020-01-27 12:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 27 12:34:37.223: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-b,UID:5b5b494e-4101-11ea-a994-fa163e34d433,ResourceVersion:19636329,Generation:0,CreationTimestamp:2020-01-27 12:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 12:34:37.223: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zjsdw,SelfLink:/api/v1/namespaces/e2e-tests-watch-zjsdw/configmaps/e2e-watch-test-configmap-b,UID:5b5b494e-4101-11ea-a994-fa163e34d433,ResourceVersion:19636329,Generation:0,CreationTimestamp:2020-01-27 12:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:34:47.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zjsdw" for this suite.
Jan 27 12:34:53.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:34:53.492: INFO: namespace: e2e-tests-watch-zjsdw, resource: bindings, ignored listing per whitelist
Jan 27 12:34:53.509: INFO: namespace e2e-tests-watch-zjsdw deletion completed in 6.262893664s

• [SLOW TEST:66.642 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
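Editor's note: the spec above drives label-filtered watches and checks that each watcher only sees ConfigMaps matching its selector, which is why every event is logged twice (once per watcher whose selector matches). Below is a minimal client-go sketch of one such watcher, not the conformance test's own code; the namespace, label value and kubeconfig path are illustrative, and a recent context-taking client-go API is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Only ConfigMaps carrying this label reach the watcher, which is why
	// watcher A in the log never observes configmap B's events.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
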
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:34:53.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 27 12:35:05.807: INFO: Pod pod-hostip-6b309243-4101-11ea-b652-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:35:05.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xskc6" for this suite.
Jan 27 12:35:29.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:35:30.144: INFO: namespace: e2e-tests-pods-xskc6, resource: bindings, ignored listing per whitelist
Jan 27 12:35:30.154: INFO: namespace e2e-tests-pods-xskc6 deletion completed in 24.340051124s

• [SLOW TEST:36.644 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
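Editor's note: the hostIP check above amounts to reading the pod's status back from the API server once it is running. A minimal client-go sketch of that read follows; the pod and namespace names are placeholders, and a recent context-taking client-go is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// HostIP is only populated once the pod has been scheduled and the kubelet
	// has reported status, hence the test waits for the pod to be running first.
	fmt.Printf("pod %s has hostIP: %s (podIP: %s)\n", pod.Name, pod.Status.HostIP, pod.Status.PodIP)
}
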
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:35:30.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 27 12:35:31.019: INFO: Waiting up to 5m0s for pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm" in namespace "e2e-tests-svcaccounts-vjlts" to be "success or failure"
Jan 27 12:35:31.068: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 49.086012ms
Jan 27 12:35:33.084: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065557746s
Jan 27 12:35:35.108: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088938205s
Jan 27 12:35:37.661: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642356126s
Jan 27 12:35:40.372: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.353629281s
Jan 27 12:35:42.443: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 11.424058335s
Jan 27 12:35:44.585: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 13.566753589s
Jan 27 12:35:46.597: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Pending", Reason="", readiness=false. Elapsed: 15.57822475s
Jan 27 12:35:48.627: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.608714443s
STEP: Saw pod success
Jan 27 12:35:48.628: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm" satisfied condition "success or failure"
Jan 27 12:35:48.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm container token-test: 
STEP: delete the pod
Jan 27 12:35:49.413: INFO: Waiting for pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm to disappear
Jan 27 12:35:49.481: INFO: Pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-hvmcm no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 27 12:35:49.509: INFO: Waiting up to 5m0s for pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c" in namespace "e2e-tests-svcaccounts-vjlts" to be "success or failure"
Jan 27 12:35:49.536: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.604195ms
Jan 27 12:35:51.649: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139975363s
Jan 27 12:35:53.664: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155727845s
Jan 27 12:35:56.150: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641293307s
Jan 27 12:35:58.478: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.969126508s
Jan 27 12:36:00.505: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.996125177s
Jan 27 12:36:02.606: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.097895645s
Jan 27 12:36:04.638: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.129807696s
Jan 27 12:36:06.758: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.24962768s
STEP: Saw pod success
Jan 27 12:36:06.758: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c" satisfied condition "success or failure"
Jan 27 12:36:06.786: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c container root-ca-test: 
STEP: delete the pod
Jan 27 12:36:06.933: INFO: Waiting for pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c to disappear
Jan 27 12:36:06.952: INFO: Pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-f974c no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 27 12:36:06.984: INFO: Waiting up to 5m0s for pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj" in namespace "e2e-tests-svcaccounts-vjlts" to be "success or failure"
Jan 27 12:36:07.132: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 147.574932ms
Jan 27 12:36:09.143: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158822017s
Jan 27 12:36:11.163: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178354547s
Jan 27 12:36:14.113: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 7.128452618s
Jan 27 12:36:16.156: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 9.171650481s
Jan 27 12:36:18.175: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 11.190648028s
Jan 27 12:36:20.469: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.48478993s
Jan 27 12:36:22.495: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.510496428s
Jan 27 12:36:24.550: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.566034719s
STEP: Saw pod success
Jan 27 12:36:24.550: INFO: Pod "pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj" satisfied condition "success or failure"
Jan 27 12:36:24.567: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj container namespace-test: 
STEP: delete the pod
Jan 27 12:36:25.641: INFO: Waiting for pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj to disappear
Jan 27 12:36:25.654: INFO: Pod pod-service-account-816582b6-4101-11ea-b652-0242ac110005-8pqlj no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:36:25.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-vjlts" for this suite.
Jan 27 12:36:33.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:36:33.923: INFO: namespace: e2e-tests-svcaccounts-vjlts, resource: bindings, ignored listing per whitelist
Jan 27 12:36:34.017: INFO: namespace e2e-tests-svcaccounts-vjlts deletion completed in 8.348108077s

• [SLOW TEST:63.863 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
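Editor's note: the three pods above (containers token-test, root-ca-test and namespace-test) each inspect one of the files that Kubernetes auto-mounts from the namespace's default service account. The sketch below shows the same idea from inside a pod, using the standard mount path; it is illustrative and not the test's own container code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Standard location where the service-account token, cluster CA and
	// namespace are projected into every pod (unless automounting is disabled).
	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(dir, name))
		if err != nil {
			fmt.Println(name, "missing:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}
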
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:36:34.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a717699a-4101-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 12:36:34.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-llsxr" to be "success or failure"
Jan 27 12:36:34.265: INFO: Pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.631745ms
Jan 27 12:36:36.348: INFO: Pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094011712s
Jan 27 12:36:38.455: INFO: Pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201277393s
Jan 27 12:36:40.903: INFO: Pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648625024s
Jan 27 12:36:42.910: INFO: Pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.655500895s
STEP: Saw pod success
Jan 27 12:36:42.910: INFO: Pod "pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:36:42.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 27 12:36:43.113: INFO: Waiting for pod pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005 to disappear
Jan 27 12:36:43.165: INFO: Pod pod-projected-configmaps-a7181fd8-4101-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:36:43.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-llsxr" for this suite.
Jan 27 12:36:49.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:36:49.445: INFO: namespace: e2e-tests-projected-llsxr, resource: bindings, ignored listing per whitelist
Jan 27 12:36:49.491: INFO: namespace e2e-tests-projected-llsxr deletion completed in 6.31619373s

• [SLOW TEST:15.474 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
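Editor's note: roughly, the pod created by the projected-configMap spec above looks like the sketch below: a projected volume sourcing a ConfigMap, mounted read-only, with the pod forced to run as a non-root UID. Names, image, UID and the file path are illustrative; the real test uses its own mount-test image and keys.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootProjectedConfigMapPod() *corev1.Pod {
	uid := int64(1000) // any non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-example",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}

func main() { _ = nonRootProjectedConfigMapPod() }
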
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:36:49.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 27 12:36:50.808: INFO: Pod name wrapped-volume-race-b0ea1415-4101-11ea-b652-0242ac110005: Found 0 pods out of 5
Jan 27 12:36:55.836: INFO: Pod name wrapped-volume-race-b0ea1415-4101-11ea-b652-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b0ea1415-4101-11ea-b652-0242ac110005 in namespace e2e-tests-emptydir-wrapper-z5959, will wait for the garbage collector to delete the pods
Jan 27 12:39:20.032: INFO: Deleting ReplicationController wrapped-volume-race-b0ea1415-4101-11ea-b652-0242ac110005 took: 26.637322ms
Jan 27 12:39:20.433: INFO: Terminating ReplicationController wrapped-volume-race-b0ea1415-4101-11ea-b652-0242ac110005 pods took: 400.53453ms
STEP: Creating RC which spawns configmap-volume pods
Jan 27 12:40:13.260: INFO: Pod name wrapped-volume-race-299112f0-4102-11ea-b652-0242ac110005: Found 0 pods out of 5
Jan 27 12:40:18.300: INFO: Pod name wrapped-volume-race-299112f0-4102-11ea-b652-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-299112f0-4102-11ea-b652-0242ac110005 in namespace e2e-tests-emptydir-wrapper-z5959, will wait for the garbage collector to delete the pods
Jan 27 12:42:10.603: INFO: Deleting ReplicationController wrapped-volume-race-299112f0-4102-11ea-b652-0242ac110005 took: 46.045993ms
Jan 27 12:42:11.203: INFO: Terminating ReplicationController wrapped-volume-race-299112f0-4102-11ea-b652-0242ac110005 pods took: 600.624062ms
STEP: Creating RC which spawns configmap-volume pods
Jan 27 12:43:02.969: INFO: Pod name wrapped-volume-race-8ec200f6-4102-11ea-b652-0242ac110005: Found 0 pods out of 5
Jan 27 12:43:08.003: INFO: Pod name wrapped-volume-race-8ec200f6-4102-11ea-b652-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8ec200f6-4102-11ea-b652-0242ac110005 in namespace e2e-tests-emptydir-wrapper-z5959, will wait for the garbage collector to delete the pods
Jan 27 12:45:04.148: INFO: Deleting ReplicationController wrapped-volume-race-8ec200f6-4102-11ea-b652-0242ac110005 took: 28.301454ms
Jan 27 12:45:04.348: INFO: Terminating ReplicationController wrapped-volume-race-8ec200f6-4102-11ea-b652-0242ac110005 pods took: 200.293931ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:45:55.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-z5959" for this suite.
Jan 27 12:46:05.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:46:05.304: INFO: namespace: e2e-tests-emptydir-wrapper-z5959, resource: bindings, ignored listing per whitelist
Jan 27 12:46:05.323: INFO: namespace e2e-tests-emptydir-wrapper-z5959 deletion completed in 10.198350377s

• [SLOW TEST:555.832 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
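Editor's note: a loose sketch of the pod shape the wrapped-volume-race ReplicationControllers above stamp out: one pod mounting many ConfigMap volumes at once (each such volume is backed internally by an emptyDir "wrapper", which is what the race check targets). The count, names, image and mount paths below are illustrative, not the test's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func raceTestPod(configMapCount int) *corev1.Pod {
	var vols []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < configMapCount; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		vols = append(vols, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: fmt.Sprintf("/etc/config-%d", i),
		})
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
		Spec: corev1.PodSpec{
			Volumes: vols,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sleep", "10000"},
				VolumeMounts: mounts,
			}},
		},
	}
}

func main() { fmt.Println(len(raceTestPod(50).Spec.Volumes), "configmap volumes") }
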
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:46:05.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 27 12:46:05.531: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sbg5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-sbg5g/configmaps/e2e-watch-test-watch-closed,UID:fb94e789-4102-11ea-a994-fa163e34d433,ResourceVersion:19637709,Generation:0,CreationTimestamp:2020-01-27 12:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 27 12:46:05.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sbg5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-sbg5g/configmaps/e2e-watch-test-watch-closed,UID:fb94e789-4102-11ea-a994-fa163e34d433,ResourceVersion:19637710,Generation:0,CreationTimestamp:2020-01-27 12:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 27 12:46:05.562: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sbg5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-sbg5g/configmaps/e2e-watch-test-watch-closed,UID:fb94e789-4102-11ea-a994-fa163e34d433,ResourceVersion:19637711,Generation:0,CreationTimestamp:2020-01-27 12:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 27 12:46:05.563: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sbg5g,SelfLink:/api/v1/namespaces/e2e-tests-watch-sbg5g/configmaps/e2e-watch-test-watch-closed,UID:fb94e789-4102-11ea-a994-fa163e34d433,ResourceVersion:19637712,Generation:0,CreationTimestamp:2020-01-27 12:46:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:46:05.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-sbg5g" for this suite.
Jan 27 12:46:14.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:46:14.508: INFO: namespace: e2e-tests-watch-sbg5g, resource: bindings, ignored listing per whitelist
Jan 27 12:46:14.594: INFO: namespace e2e-tests-watch-sbg5g deletion completed in 9.024525562s

• [SLOW TEST:9.271 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
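Editor's note: the spec above exercises the restart-from-resourceVersion pattern: remember the ResourceVersion of the last observed event, then open a new watch starting from it so that changes made while the first watch was closed are still delivered. A minimal client-go sketch of that pattern follows; the namespace, label and kubeconfig path are illustrative, and a recent context-taking client-go is assumed.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}

	// First watch: consume the first two notifications and remember where we stopped.
	w1, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	var lastRV string
	for i := 0; i < 2; i++ {
		ev := <-w1.ResultChan()
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			lastRV = cm.ResourceVersion
		}
		fmt.Println("first watch:", ev.Type)
	}
	w1.Stop()

	// Second watch: resume from the recorded ResourceVersion, so intervening
	// modifications and the deletion are still observed in order.
	opts.ResourceVersion = lastRV
	w2, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	defer w2.Stop()
	for ev := range w2.ResultChan() {
		fmt.Println("second watch:", ev.Type)
	}
}
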
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:46:14.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 12:46:14.868: INFO: Waiting up to 5m0s for pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-vp7wt" to be "success or failure"
Jan 27 12:46:14.877: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890872ms
Jan 27 12:46:16.892: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024075891s
Jan 27 12:46:18.902: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034142415s
Jan 27 12:46:20.926: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057549728s
Jan 27 12:46:23.418: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.549895619s
Jan 27 12:46:25.430: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.562362165s
STEP: Saw pod success
Jan 27 12:46:25.431: INFO: Pod "downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:46:25.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 12:46:25.493: INFO: Waiting for pod downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:46:25.500: INFO: Pod downwardapi-volume-012976fe-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:46:25.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vp7wt" for this suite.
Jan 27 12:46:32.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:46:32.404: INFO: namespace: e2e-tests-projected-vp7wt, resource: bindings, ignored listing per whitelist
Jan 27 12:46:32.547: INFO: namespace e2e-tests-projected-vp7wt deletion completed in 7.038494183s

• [SLOW TEST:17.953 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
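Editor's note: the DefaultMode spec above sets up a projected downward-API volume whose files all default to one file mode. The rough Go shape of such a volume is sketched below; the volume name, field reference and the 0400 mode are illustrative, not the test's exact values.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func downwardAPIDefaultModeVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// DefaultMode applies to every projected file below unless a
				// per-item Mode overrides it.
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
}

func main() { _ = downwardAPIDefaultModeVolume() }
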
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:46:32.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 12:46:32.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-bxbgq" to be "success or failure"
Jan 27 12:46:33.004: INFO: Pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.355367ms
Jan 27 12:46:35.249: INFO: Pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2824466s
Jan 27 12:46:37.265: INFO: Pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298401486s
Jan 27 12:46:39.280: INFO: Pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313286667s
Jan 27 12:46:41.346: INFO: Pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.378955343s
STEP: Saw pod success
Jan 27 12:46:41.346: INFO: Pod "downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:46:41.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 12:46:41.596: INFO: Waiting for pod downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:46:41.604: INFO: Pod downwardapi-volume-0bf37bf5-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:46:41.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bxbgq" for this suite.
Jan 27 12:46:47.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:46:47.746: INFO: namespace: e2e-tests-downward-api-bxbgq, resource: bindings, ignored listing per whitelist
Jan 27 12:46:47.801: INFO: namespace e2e-tests-downward-api-bxbgq deletion completed in 6.190960368s

• [SLOW TEST:15.252 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:46:47.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 27 12:46:48.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 27 12:46:50.055: INFO: stderr: ""
Jan 27 12:46:50.055: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:46:50.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-62gvx" for this suite.
Jan 27 12:46:56.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:46:56.173: INFO: namespace: e2e-tests-kubectl-62gvx, resource: bindings, ignored listing per whitelist
Jan 27 12:46:56.286: INFO: namespace e2e-tests-kubectl-62gvx deletion completed in 6.219447601s

• [SLOW TEST:8.485 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:46:56.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 27 12:47:04.315: INFO: 2 pods remaining
Jan 27 12:47:04.315: INFO: 0 pods has nil DeletionTimestamp
Jan 27 12:47:04.315: INFO: 
STEP: Gathering metrics
W0127 12:47:05.024500       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 12:47:05.024: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:47:05.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vbmd2" for this suite.
Jan 27 12:47:17.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:47:17.214: INFO: namespace: e2e-tests-gc-vbmd2, resource: bindings, ignored listing per whitelist
Jan 27 12:47:17.246: INFO: namespace e2e-tests-gc-vbmd2 deletion completed in 12.215181479s

• [SLOW TEST:20.960 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
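Editor's note: the garbage-collector spec above checks foreground deletion semantics: deleting a ReplicationController with a Foreground propagation policy keeps the RC object around (with a deletion timestamp and finalizer) until the garbage collector has removed all of its pods, which is why "2 pods remaining" is still logged after the delete. A minimal client-go sketch of such a delete follows; the RC and namespace names are illustrative, and a recent context-taking client-go is assumed.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	policy := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(),
		"example-rc", // illustrative name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
	// The RC now carries the foregroundDeletion finalizer and remains visible
	// until its dependent pods have been deleted by the garbage collector.
}
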
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:47:17.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2676a71e-4103-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 12:47:17.450: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-t4d2d" to be "success or failure"
Jan 27 12:47:17.532: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.958074ms
Jan 27 12:47:19.730: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280197713s
Jan 27 12:47:21.758: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308389837s
Jan 27 12:47:24.018: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.568468798s
Jan 27 12:47:26.030: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580252598s
Jan 27 12:47:28.051: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.600690344s
STEP: Saw pod success
Jan 27 12:47:28.051: INFO: Pod "pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:47:28.057: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 27 12:47:28.137: INFO: Waiting for pod pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:47:28.193: INFO: Pod pod-projected-configmaps-2677e683-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:47:28.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t4d2d" for this suite.
Jan 27 12:47:34.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:47:34.354: INFO: namespace: e2e-tests-projected-t4d2d, resource: bindings, ignored listing per whitelist
Jan 27 12:47:34.357: INFO: namespace e2e-tests-projected-t4d2d deletion completed in 6.15384423s

• [SLOW TEST:17.112 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:47:34.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 27 12:47:34.528: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 27 12:47:34.544: INFO: Waiting for terminating namespaces to be deleted...
Jan 27 12:47:34.547: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 27 12:47:34.561: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 27 12:47:34.561: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 27 12:47:34.561: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 12:47:34.561: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 27 12:47:34.561: INFO: 	Container weave ready: true, restart count 0
Jan 27 12:47:34.562: INFO: 	Container weave-npc ready: true, restart count 0
Jan 27 12:47:34.562: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 27 12:47:34.562: INFO: 	Container coredns ready: true, restart count 0
Jan 27 12:47:34.562: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 12:47:34.562: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 12:47:34.562: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 27 12:47:34.562: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 27 12:47:34.562: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 27 12:47:34.724: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30c69646-4103-11ea-b652-0242ac110005.15edbf2f713251a2], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-kk4gw/filler-pod-30c69646-4103-11ea-b652-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30c69646-4103-11ea-b652-0242ac110005.15edbf3065516ac8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30c69646-4103-11ea-b652-0242ac110005.15edbf30be857587], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30c69646-4103-11ea-b652-0242ac110005.15edbf30ede074ef], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15edbf3150e9766f], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:47:43.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-kk4gw" for this suite.
Jan 27 12:47:52.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:47:52.303: INFO: namespace: e2e-tests-sched-pred-kk4gw, resource: bindings, ignored listing per whitelist
Jan 27 12:47:52.405: INFO: namespace e2e-tests-sched-pred-kk4gw deletion completed in 8.43445831s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.048 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
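Editor's note: the scheduling spec above sums the CPU already requested on the node, starts "filler" pause pods sized to consume most of the remaining allocatable CPU, and then creates one more pod that cannot fit, expecting the FailedScheduling event logged above. A rough Go shape of such a filler pod is sketched below; the request value is illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fillerPod(cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "filler-pod-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1", // same image as in the events above
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
				},
			}},
		},
	}
}

func main() { _ = fillerPod("600m") }
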
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:47:52.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n5dfp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 12:47:52.746: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 12:48:27.070: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-n5dfp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 12:48:27.070: INFO: >>> kubeConfig: /root/.kube/config
I0127 12:48:27.171803       8 log.go:172] (0xc0026f82c0) (0xc0010e9e00) Create stream
I0127 12:48:27.171910       8 log.go:172] (0xc0026f82c0) (0xc0010e9e00) Stream added, broadcasting: 1
I0127 12:48:27.181515       8 log.go:172] (0xc0026f82c0) Reply frame received for 1
I0127 12:48:27.181609       8 log.go:172] (0xc0026f82c0) (0xc001bb1f40) Create stream
I0127 12:48:27.181640       8 log.go:172] (0xc0026f82c0) (0xc001bb1f40) Stream added, broadcasting: 3
I0127 12:48:27.183388       8 log.go:172] (0xc0026f82c0) Reply frame received for 3
I0127 12:48:27.183432       8 log.go:172] (0xc0026f82c0) (0xc001c50000) Create stream
I0127 12:48:27.183448       8 log.go:172] (0xc0026f82c0) (0xc001c50000) Stream added, broadcasting: 5
I0127 12:48:27.184817       8 log.go:172] (0xc0026f82c0) Reply frame received for 5
I0127 12:48:27.382045       8 log.go:172] (0xc0026f82c0) Data frame received for 3
I0127 12:48:27.382098       8 log.go:172] (0xc001bb1f40) (3) Data frame handling
I0127 12:48:27.382129       8 log.go:172] (0xc001bb1f40) (3) Data frame sent
I0127 12:48:27.497248       8 log.go:172] (0xc0026f82c0) (0xc001bb1f40) Stream removed, broadcasting: 3
I0127 12:48:27.497407       8 log.go:172] (0xc0026f82c0) Data frame received for 1
I0127 12:48:27.497441       8 log.go:172] (0xc0010e9e00) (1) Data frame handling
I0127 12:48:27.497498       8 log.go:172] (0xc0010e9e00) (1) Data frame sent
I0127 12:48:27.497521       8 log.go:172] (0xc0026f82c0) (0xc0010e9e00) Stream removed, broadcasting: 1
I0127 12:48:27.497545       8 log.go:172] (0xc0026f82c0) (0xc001c50000) Stream removed, broadcasting: 5
I0127 12:48:27.497620       8 log.go:172] (0xc0026f82c0) Go away received
I0127 12:48:27.497851       8 log.go:172] (0xc0026f82c0) (0xc0010e9e00) Stream removed, broadcasting: 1
I0127 12:48:27.497867       8 log.go:172] (0xc0026f82c0) (0xc001bb1f40) Stream removed, broadcasting: 3
I0127 12:48:27.497876       8 log.go:172] (0xc0026f82c0) (0xc001c50000) Stream removed, broadcasting: 5
Jan 27 12:48:27.497: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:48:27.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-n5dfp" for this suite.
Jan 27 12:48:51.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:48:51.720: INFO: namespace: e2e-tests-pod-network-test-n5dfp, resource: bindings, ignored listing per whitelist
Jan 27 12:48:51.908: INFO: namespace e2e-tests-pod-network-test-n5dfp deletion completed in 24.3953266s

• [SLOW TEST:59.502 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
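Editor's note: the exec step above issues an HTTP GET against the test webserver's /dial endpoint, asking the pod at 10.32.0.5 to relay a UDP probe to 10.32.0.4:8081 and report which hostnames answered; the empty "Waiting for endpoints: map[]" line means every expected peer responded. The sketch below reproduces that probe; the URL is copied from this run and only resolves inside that cluster's pod network.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	url := "http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The body is a small JSON document listing the hostnames that answered
	// the relayed UDP request.
	fmt.Println(string(body))
}
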
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:48:51.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 27 12:48:52.445: INFO: Waiting up to 5m0s for pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-h4zgs" to be "success or failure"
Jan 27 12:48:52.480: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.093801ms
Jan 27 12:48:54.700: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254314424s
Jan 27 12:48:56.724: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278190069s
Jan 27 12:48:58.789: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343982296s
Jan 27 12:49:00.809: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363285107s
Jan 27 12:49:02.829: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.383197928s
STEP: Saw pod success
Jan 27 12:49:02.829: INFO: Pod "pod-5f01f4f4-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:49:02.835: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5f01f4f4-4103-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 12:49:02.910: INFO: Waiting for pod pod-5f01f4f4-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:49:02.931: INFO: Pod pod-5f01f4f4-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:49:02.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h4zgs" for this suite.
Jan 27 12:49:09.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:49:09.166: INFO: namespace: e2e-tests-emptydir-h4zgs, resource: bindings, ignored listing per whitelist
Jan 27 12:49:09.273: INFO: namespace e2e-tests-emptydir-h4zgs deletion completed in 6.334306143s

• [SLOW TEST:17.364 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
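The (non-root,0666,tmpfs) spec reduces to: a memory-backed emptyDir mounted into a container that runs as a non-root user, which writes a file and sets mode 0666 on it. A rough equivalent of that pod, built with k8s.io/api; the UID, image, command and mount path are illustrative rather than the exact values the conformance test uses.

    // emptydir_0666.go - sketch of a pod exercising a tmpfs-backed emptyDir as
    // a non-root user; UID 1000, busybox and /mnt/test are illustrative choices.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
      uid := int64(1000) // any non-root UID; the e2e test pins its own value

      pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
        Spec: corev1.PodSpec{
          RestartPolicy:   corev1.RestartPolicyNever,
          SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
          Volumes: []corev1.Volume{{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
              // Medium "Memory" is what makes the emptyDir a tmpfs mount.
              EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
          }},
          Containers: []corev1.Container{{
            Name:         "test-container",
            Image:        "busybox:1.29",
            Command:      []string{"sh", "-c", "echo ok > /mnt/test/f && chmod 0666 /mnt/test/f && ls -ln /mnt/test/f"},
            VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
          }},
        },
      }

      out, _ := json.MarshalIndent(pod, "", "  ")
      fmt.Println(string(out))
    }
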
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:49:09.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6946183d-4103-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 12:49:09.530: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-fz56b" to be "success or failure"
Jan 27 12:49:09.555: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.399908ms
Jan 27 12:49:11.572: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041518774s
Jan 27 12:49:13.598: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067504646s
Jan 27 12:49:15.624: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093579208s
Jan 27 12:49:17.774: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243980784s
Jan 27 12:49:19.787: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.256489382s
STEP: Saw pod success
Jan 27 12:49:19.787: INFO: Pod "pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:49:19.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 27 12:49:19.871: INFO: Waiting for pod pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:49:19.890: INFO: Pod pod-projected-secrets-6946d092-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:49:19.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fz56b" for this suite.
Jan 27 12:49:26.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:49:26.137: INFO: namespace: e2e-tests-projected-fz56b, resource: bindings, ignored listing per whitelist
Jan 27 12:49:26.165: INFO: namespace e2e-tests-projected-fz56b deletion completed in 6.189458915s

• [SLOW TEST:16.891 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
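The projected-secret spec creates a secret and a pod whose volume projects that secret into the filesystem, then reads the key back from the mounted file. A sketch of the same pair of objects; the secret name, key, image and mount path are illustrative.

    // projected_secret.go - sketch of a secret consumed via a projected volume;
    // the secret name, key and mount path are illustrative.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
      secret := corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
        StringData: map[string]string{"data-1": "value-1"},
      }

      pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
        Spec: corev1.PodSpec{
          RestartPolicy: corev1.RestartPolicyNever,
          Volumes: []corev1.Volume{{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
              Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                  Secret: &corev1.SecretProjection{
                    LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                  },
                }},
              },
            },
          }},
          Containers: []corev1.Container{{
            Name:         "projected-secret-volume-test",
            Image:        "busybox:1.29",
            Command:      []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
            VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true}},
          }},
        },
      }

      for _, obj := range []interface{}{secret, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
      }
    }
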
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:49:26.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-73561715-4103-11ea-b652-0242ac110005
STEP: Creating secret with name s-test-opt-upd-73561782-4103-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-73561715-4103-11ea-b652-0242ac110005
STEP: Updating secret s-test-opt-upd-73561782-4103-11ea-b652-0242ac110005
STEP: Creating secret with name s-test-opt-create-735617a7-4103-11ea-b652-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:49:42.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ls4zl" for this suite.
Jan 27 12:50:08.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:50:08.967: INFO: namespace: e2e-tests-projected-ls4zl, resource: bindings, ignored listing per whitelist
Jan 27 12:50:09.070: INFO: namespace e2e-tests-projected-ls4zl deletion completed in 26.265994736s

• [SLOW TEST:42.904 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
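The optional-updates spec leans on two behaviours: secret sources in a projected volume can be marked optional, so deleting one does not break the mount, and the kubelet re-syncs projected content, so updated and late-created secrets eventually appear in the volume. A short fragment showing just the optional projection, under the same assumptions as the previous sketch; the names echo the prefixes used above with the run-specific suffix dropped.

    // optional_projection.go - sketch of optional secret sources in a projected
    // volume; the secret names are illustrative.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
    )

    func main() {
      optional := true

      sources := []corev1.VolumeProjection{
        {Secret: &corev1.SecretProjection{
          LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
          Optional:             &optional, // deleting this secret must not break the volume
        }},
        {Secret: &corev1.SecretProjection{
          LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
          Optional:             &optional, // updates to this secret are re-synced by the kubelet
        }},
      }

      out, _ := json.MarshalIndent(corev1.ProjectedVolumeSource{Sources: sources}, "", "  ")
      fmt.Println(string(out))
    }
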
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:50:09.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 27 12:50:19.523: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan 27 12:51:51.764: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-jvw5b".
STEP: Found 0 events.
Jan 27 12:51:51.906: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan 27 12:51:51.906: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:50:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:50:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:50:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:50:19 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 27 12:51:51.906: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:11:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan 27 12:51:51.907: INFO: 
Jan 27 12:51:51.957: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan 27 12:51:52.211: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:19638519,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-27 12:51:47 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-27 12:51:47 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-27 12:51:47 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-27 12:51:47 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717 nginx:latest] 126698067} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 
k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 27 12:51:52.212: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan 27 12:51:52.380: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan 27 12:51:52.423: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 27 12:51:52.423: INFO: 	Container coredns ready: true, restart count 0
Jan 27 12:51:52.423: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan 27 12:51:52.423: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 27 12:51:52.423: INFO: test-pod-uninitialized started at 2020-01-27 12:50:19 +0000 UTC (0+1 container statuses recorded)
Jan 27 12:51:52.423: INFO: 	Container nginx ready: true, restart count 0
Jan 27 12:51:52.423: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 27 12:51:52.423: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan 27 12:51:52.424: INFO: 	Container weave ready: true, restart count 0
Jan 27 12:51:52.424: INFO: 	Container weave-npc ready: true, restart count 0
Jan 27 12:51:52.424: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 27 12:51:52.424: INFO: 	Container coredns ready: true, restart count 0
Jan 27 12:51:52.424: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 27 12:51:52.424: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 27 12:51:52.424: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0127 12:51:52.491451       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 27 12:51:52.622: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan 27 12:51:52.622: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m6.976279s}
Jan 27 12:51:52.622: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:33.253314s}
Jan 27 12:51:52.622: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:19.361726s}
Jan 27 12:51:52.622: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:19.343112s}
Jan 27 12:51:52.622: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:19.343112s}
Jan 27 12:51:52.622: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:16.912385s}
Jan 27 12:51:52.622: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.674028s}
Jan 27 12:51:52.622: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.023034s}
Jan 27 12:51:52.622: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:10.233474s}
Jan 27 12:51:52.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-jvw5b" for this suite.
Jan 27 12:51:58.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:51:58.800: INFO: namespace: e2e-tests-namespaces-jvw5b, resource: bindings, ignored listing per whitelist
Jan 27 12:51:58.836: INFO: namespace e2e-tests-namespaces-jvw5b deletion completed in 6.203652007s
STEP: Destroying namespace "e2e-tests-nsdeletetest-x5vfp" for this suite.
Jan 27 12:51:58.843: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-x5vfp": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-x5vfp": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-x5vfp\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc001920240), Code:409}})

• Failure [109.774 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000a18b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
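The failure above is the framework timing out while waiting for the test namespace to become empty; the later cleanup then receives a 409 Conflict because the namespace is still terminating ("The system is ensuring all content is removed from this namespace..."). Callers normally recognise that status and retry rather than fail. A small sketch that rebuilds a StatusError of the same shape and classifies it with apimachinery's helper; the message text is copied from the log, everything else is illustrative.

    // conflict_check.go - sketch of classifying the 409 returned while a
    // namespace is still terminating, as seen in the failure above.
    package main

    import (
      "fmt"

      apierrors "k8s.io/apimachinery/pkg/api/errors"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
      // Rebuild an error shaped like the one the namespace delete returned.
      err := &apierrors.StatusError{ErrStatus: metav1.Status{
        Status:  metav1.StatusFailure,
        Message: "Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-x5vfp\": The system is ensuring all content is removed from this namespace.",
        Reason:  metav1.StatusReasonConflict,
        Code:    409,
      }}

      // IsConflict is the usual way callers decide to retry instead of failing.
      fmt.Println("conflict, retry later:", apierrors.IsConflict(err))
    }
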
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:51:58.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 27 12:51:59.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:51:59.333: INFO: stderr: ""
Jan 27 12:51:59.333: INFO: stdout: "pod/pause created\n"
Jan 27 12:51:59.334: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 27 12:51:59.334: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-xplzf" to be "running and ready"
Jan 27 12:51:59.345: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.8431ms
Jan 27 12:52:01.376: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041887366s
Jan 27 12:52:03.388: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053933799s
Jan 27 12:52:05.502: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168455697s
Jan 27 12:52:07.514: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.180675342s
Jan 27 12:52:07.514: INFO: Pod "pause" satisfied condition "running and ready"
Jan 27 12:52:07.514: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 27 12:52:07.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:52:07.706: INFO: stderr: ""
Jan 27 12:52:07.706: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 27 12:52:07.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:52:07.848: INFO: stderr: ""
Jan 27 12:52:07.849: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 27 12:52:07.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:52:07.984: INFO: stderr: ""
Jan 27 12:52:07.984: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 27 12:52:07.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:52:08.127: INFO: stderr: ""
Jan 27 12:52:08.127: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 27 12:52:08.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:52:08.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 12:52:08.327: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 27 12:52:08.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-xplzf'
Jan 27 12:52:08.501: INFO: stderr: "No resources found.\n"
Jan 27 12:52:08.501: INFO: stdout: ""
Jan 27 12:52:08.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-xplzf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 12:52:08.633: INFO: stderr: ""
Jan 27 12:52:08.634: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:52:08.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xplzf" for this suite.
Jan 27 12:52:14.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:52:15.016: INFO: namespace: e2e-tests-kubectl-xplzf, resource: bindings, ignored listing per whitelist
Jan 27 12:52:15.050: INFO: namespace e2e-tests-kubectl-xplzf deletion completed in 6.371292346s

• [SLOW TEST:16.204 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
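'kubectl label pods pause testing-label=testing-label-value' and the trailing-dash removal form in the spec above amount, in effect, to merge patches against the pod's metadata, where a null value deletes the key. A sketch that only builds those two patch bodies; the label key and value are the ones used above.

    // label_patches.go - sketch of the merge-patch bodies behind
    // "kubectl label pods pause testing-label=testing-label-value" and
    // "kubectl label pods pause testing-label-".
    package main

    import (
      "encoding/json"
      "fmt"
    )

    func main() {
      add := map[string]interface{}{
        "metadata": map[string]interface{}{
          "labels": map[string]interface{}{"testing-label": "testing-label-value"},
        },
      }
      remove := map[string]interface{}{
        "metadata": map[string]interface{}{
          // A null value in a merge patch deletes the key.
          "labels": map[string]interface{}{"testing-label": nil},
        },
      }

      for _, p := range []map[string]interface{}{add, remove} {
        body, _ := json.Marshal(p)
        fmt.Println(string(body))
      }
    }
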
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:52:15.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 12:52:15.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-pwtpl'
Jan 27 12:52:15.413: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 12:52:15.413: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 27 12:52:15.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-pwtpl'
Jan 27 12:52:15.596: INFO: stderr: ""
Jan 27 12:52:15.596: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:52:15.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pwtpl" for this suite.
Jan 27 12:52:23.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:52:23.902: INFO: namespace: e2e-tests-kubectl-pwtpl, resource: bindings, ignored listing per whitelist
Jan 27 12:52:23.969: INFO: namespace e2e-tests-kubectl-pwtpl deletion completed in 8.28387637s

• [SLOW TEST:8.919 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
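As the stderr above notes, 'kubectl run --generator=job/v1 --restart=OnFailure' is deprecated; what it creates is a batch/v1 Job whose pod template restarts on failure. A rough Go rendering of that object; only the job name and image come from the log, the rest is illustrative.

    // run_job.go - sketch of the Job created by "kubectl run e2e-test-nginx-job
    // --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine".
    package main

    import (
      "encoding/json"
      "fmt"

      batchv1 "k8s.io/api/batch/v1"
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
      job := batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
        Spec: batchv1.JobSpec{
          Template: corev1.PodTemplateSpec{
            Spec: corev1.PodSpec{
              // OnFailure is what --restart=OnFailure maps to, and is why the
              // job generator is used rather than a deployment.
              RestartPolicy: corev1.RestartPolicyOnFailure,
              Containers: []corev1.Container{{
                Name:  "e2e-test-nginx-job",
                Image: "docker.io/library/nginx:1.14-alpine",
              }},
            },
          },
        },
      }

      out, _ := json.MarshalIndent(job, "", "  ")
      fmt.Println(string(out))
    }
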
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:52:23.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-mpt4m/secret-test-dd529fc1-4103-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 12:52:24.305: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-mpt4m" to be "success or failure"
Jan 27 12:52:24.321: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.052778ms
Jan 27 12:52:26.337: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031964143s
Jan 27 12:52:28.367: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061952744s
Jan 27 12:52:30.378: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073191563s
Jan 27 12:52:32.385: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.080477333s
Jan 27 12:52:34.397: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09176692s
STEP: Saw pod success
Jan 27 12:52:34.397: INFO: Pod "pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:52:34.401: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005 container env-test: 
STEP: delete the pod
Jan 27 12:52:34.952: INFO: Waiting for pod pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:52:34.963: INFO: Pod pod-configmaps-dd53b69d-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:52:34.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mpt4m" for this suite.
Jan 27 12:52:41.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:52:41.666: INFO: namespace: e2e-tests-secrets-mpt4m, resource: bindings, ignored listing per whitelist
Jan 27 12:52:41.686: INFO: namespace e2e-tests-secrets-mpt4m deletion completed in 6.69686057s

• [SLOW TEST:17.716 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
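The secrets-via-environment spec creates a secret and a pod whose container maps one secret key into an environment variable through secretKeyRef, then checks the variable in the container's output. A sketch of the container side; the secret name, key and variable name are illustrative.

    // secret_env.go - sketch of consuming a secret through the environment;
    // the secret name, key and variable name are illustrative.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
      secret := corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test-demo"},
        StringData: map[string]string{"data-1": "value-1"},
      }

      container := corev1.Container{
        Name:    "env-test",
        Image:   "busybox:1.29",
        Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
        Env: []corev1.EnvVar{{
          Name: "SECRET_DATA",
          ValueFrom: &corev1.EnvVarSource{
            SecretKeyRef: &corev1.SecretKeySelector{
              LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
              Key:                  "data-1",
            },
          },
        }},
      }

      out, _ := json.MarshalIndent(container, "", "  ")
      fmt.Println(string(out))
    }
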
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:52:41.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e7d9abc0-4103-11ea-b652-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 27 12:52:41.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005" in namespace "e2e-tests-configmap-hfvxs" to be "success or failure"
Jan 27 12:52:42.095: INFO: Pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 168.284564ms
Jan 27 12:52:44.108: INFO: Pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180492545s
Jan 27 12:52:46.125: INFO: Pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198247046s
Jan 27 12:52:48.255: INFO: Pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327551728s
Jan 27 12:52:50.265: INFO: Pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.338243184s
STEP: Saw pod success
Jan 27 12:52:50.265: INFO: Pod "pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:52:50.268: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 27 12:52:50.428: INFO: Waiting for pod pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005 to disappear
Jan 27 12:52:50.441: INFO: Pod pod-configmaps-e7deb5ee-4103-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:52:50.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hfvxs" for this suite.
Jan 27 12:52:56.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:52:56.665: INFO: namespace: e2e-tests-configmap-hfvxs, resource: bindings, ignored listing per whitelist
Jan 27 12:52:56.671: INFO: namespace e2e-tests-configmap-hfvxs deletion completed in 6.220411184s

• [SLOW TEST:14.985 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
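The 'with mappings' variant of the configMap volume spec adds items to the volume source, so a chosen key is exposed at a chosen path instead of under its own name. A sketch of just that volume; the key, path and names are illustrative.

    // configmap_mappings.go - sketch of a configMap volume that remaps a key to
    // a specific path; the key and path are illustrative.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
    )

    func main() {
      volume := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
          ConfigMap: &corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
            // Without Items every key is exposed under its own name; with
            // Items only the listed keys appear, at the given paths.
            Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
          },
        },
      }

      out, _ := json.MarshalIndent(volume, "", "  ")
      fmt.Println(string(out))
    }
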
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:52:56.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-c86kq
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 27 12:52:56.840: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 27 12:53:35.034: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-c86kq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 27 12:53:35.035: INFO: >>> kubeConfig: /root/.kube/config
I0127 12:53:35.104924       8 log.go:172] (0xc0016f8370) (0xc0028765a0) Create stream
I0127 12:53:35.104979       8 log.go:172] (0xc0016f8370) (0xc0028765a0) Stream added, broadcasting: 1
I0127 12:53:35.133962       8 log.go:172] (0xc0016f8370) Reply frame received for 1
I0127 12:53:35.134044       8 log.go:172] (0xc0016f8370) (0xc001e0c000) Create stream
I0127 12:53:35.134061       8 log.go:172] (0xc0016f8370) (0xc001e0c000) Stream added, broadcasting: 3
I0127 12:53:35.138508       8 log.go:172] (0xc0016f8370) Reply frame received for 3
I0127 12:53:35.138539       8 log.go:172] (0xc0016f8370) (0xc002876640) Create stream
I0127 12:53:35.138569       8 log.go:172] (0xc0016f8370) (0xc002876640) Stream added, broadcasting: 5
I0127 12:53:35.142085       8 log.go:172] (0xc0016f8370) Reply frame received for 5
I0127 12:53:36.351455       8 log.go:172] (0xc0016f8370) Data frame received for 3
I0127 12:53:36.351514       8 log.go:172] (0xc001e0c000) (3) Data frame handling
I0127 12:53:36.351539       8 log.go:172] (0xc001e0c000) (3) Data frame sent
I0127 12:53:36.573313       8 log.go:172] (0xc0016f8370) Data frame received for 1
I0127 12:53:36.573408       8 log.go:172] (0xc0016f8370) (0xc002876640) Stream removed, broadcasting: 5
I0127 12:53:36.573475       8 log.go:172] (0xc0028765a0) (1) Data frame handling
I0127 12:53:36.573512       8 log.go:172] (0xc0028765a0) (1) Data frame sent
I0127 12:53:36.573549       8 log.go:172] (0xc0016f8370) (0xc001e0c000) Stream removed, broadcasting: 3
I0127 12:53:36.573595       8 log.go:172] (0xc0016f8370) (0xc0028765a0) Stream removed, broadcasting: 1
I0127 12:53:36.573609       8 log.go:172] (0xc0016f8370) Go away received
I0127 12:53:36.574068       8 log.go:172] (0xc0016f8370) (0xc0028765a0) Stream removed, broadcasting: 1
I0127 12:53:36.574091       8 log.go:172] (0xc0016f8370) (0xc001e0c000) Stream removed, broadcasting: 3
I0127 12:53:36.574101       8 log.go:172] (0xc0016f8370) (0xc002876640) Stream removed, broadcasting: 5
Jan 27 12:53:36.574: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:53:36.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-c86kq" for this suite.
Jan 27 12:54:00.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:54:00.750: INFO: namespace: e2e-tests-pod-network-test-c86kq, resource: bindings, ignored listing per whitelist
Jan 27 12:54:00.853: INFO: namespace e2e-tests-pod-network-test-c86kq deletion completed in 24.246549811s

• [SLOW TEST:64.182 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
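The exec logged above pipes the token 'hostName' into 'nc -w 1 -u 10.32.0.4 8081' from the hostexec pod and expects the netserver pod to answer with its hostname. A tiny stand-alone sketch of the answering side, a UDP listener that replies with the host's name; the port is illustrative and the real netserver image implements more than this.

    // udp_hostname_server.go - sketch of the answering side of the UDP check:
    // reply to the request "hostName" with this host's name.
    package main

    import (
      "log"
      "net"
      "os"
      "strings"
    )

    func main() {
      conn, err := net.ListenPacket("udp", ":8081") // port is illustrative
      if err != nil {
        log.Fatal(err)
      }
      defer conn.Close()

      hostname, _ := os.Hostname()
      buf := make([]byte, 1024)
      for {
        n, addr, err := conn.ReadFrom(buf)
        if err != nil {
          log.Fatal(err)
        }
        // The probe pipes the literal token "hostName" through nc; answer
        // with this host's name, as the netserver image does.
        if strings.TrimSpace(string(buf[:n])) == "hostName" {
          conn.WriteTo([]byte(hostname), addr)
        }
      }
    }
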
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:54:00.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 27 12:54:01.124: INFO: Waiting up to 5m0s for pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-brvkh" to be "success or failure"
Jan 27 12:54:01.137: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.581107ms
Jan 27 12:54:03.150: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025720942s
Jan 27 12:54:05.182: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058675038s
Jan 27 12:54:07.218: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094667127s
Jan 27 12:54:09.533: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409399402s
Jan 27 12:54:11.550: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.426094922s
STEP: Saw pod success
Jan 27 12:54:11.550: INFO: Pod "downward-api-1704e8bf-4104-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:54:11.554: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1704e8bf-4104-11ea-b652-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 27 12:54:12.451: INFO: Waiting for pod downward-api-1704e8bf-4104-11ea-b652-0242ac110005 to disappear
Jan 27 12:54:12.805: INFO: Pod downward-api-1704e8bf-4104-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:54:12.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-brvkh" for this suite.
Jan 27 12:54:19.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:54:19.471: INFO: namespace: e2e-tests-downward-api-brvkh, resource: bindings, ignored listing per whitelist
Jan 27 12:54:20.277: INFO: namespace e2e-tests-downward-api-brvkh deletion completed in 7.435493438s

• [SLOW TEST:19.424 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
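The Downward API spec checks that resourceFieldRef environment variables for limits.cpu and limits.memory fall back to the node's allocatable values when the container declares no limits of its own. A sketch of such a container; the variable names, image and divisors are illustrative.

    // downward_limits.go - sketch of resourceFieldRef env vars that fall back to
    // node allocatable when no limits are set; names and divisors are illustrative.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
      "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
      container := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox:1.29",
        Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
        // No Resources set: the kubelet substitutes node allocatable values.
        Env: []corev1.EnvVar{
          {
            Name: "CPU_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
              ResourceFieldRef: &corev1.ResourceFieldSelector{
                Resource: "limits.cpu",
                Divisor:  resource.MustParse("1"),
              },
            },
          },
          {
            Name: "MEMORY_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
              ResourceFieldRef: &corev1.ResourceFieldSelector{
                Resource: "limits.memory",
                Divisor:  resource.MustParse("1Mi"),
              },
            },
          },
        },
      }

      out, _ := json.MarshalIndent(container, "", "  ")
      fmt.Println(string(out))
    }
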
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:54:20.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 27 12:54:20.705: INFO: Waiting up to 5m0s for pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-2ccc6" to be "success or failure"
Jan 27 12:54:20.722: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.129919ms
Jan 27 12:54:22.744: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039002419s
Jan 27 12:54:24.766: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061149093s
Jan 27 12:54:26.836: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131080366s
Jan 27 12:54:28.971: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265798037s
Jan 27 12:54:30.991: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.285587188s
STEP: Saw pod success
Jan 27 12:54:30.991: INFO: Pod "pod-22bf6f5c-4104-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:54:31.004: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-22bf6f5c-4104-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 12:54:31.694: INFO: Waiting for pod pod-22bf6f5c-4104-11ea-b652-0242ac110005 to disappear
Jan 27 12:54:31.935: INFO: Pod pod-22bf6f5c-4104-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:54:31.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2ccc6" for this suite.
Jan 27 12:54:40.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:54:40.081: INFO: namespace: e2e-tests-emptydir-2ccc6, resource: bindings, ignored listing per whitelist
Jan 27 12:54:40.519: INFO: namespace e2e-tests-emptydir-2ccc6 deletion completed in 8.571005889s

• [SLOW TEST:20.242 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
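This spec has the same shape as the 0666 emptyDir sketch earlier, only the requested mode differs; the one extra point worth showing is that a Medium: Memory emptyDir really is a tmpfs mount, which the container can confirm from /proc/mounts. A short container-only variant, with illustrative paths.

    // emptydir_0777_check.go - variant of the earlier emptyDir sketch: chmod the
    // file to 0777 and confirm the mount is tmpfs; paths are illustrative.
    package main

    import (
      "encoding/json"
      "fmt"

      corev1 "k8s.io/api/core/v1"
    )

    func main() {
      container := corev1.Container{
        Name:  "test-container",
        Image: "busybox:1.29",
        Command: []string{"sh", "-c",
          "grep ' /mnt/test ' /proc/mounts && " + // should report tmpfs for Medium: Memory
            "touch /mnt/test/f && chmod 0777 /mnt/test/f && stat -c %a /mnt/test/f"},
        VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
      }

      out, _ := json.MarshalIndent(container, "", "  ")
      fmt.Println(string(out))
    }
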
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:54:40.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 27 12:54:40.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jl22r'
Jan 27 12:54:41.475: INFO: stderr: ""
Jan 27 12:54:41.475: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 27 12:54:43.096: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:43.096: INFO: Found 0 / 1
Jan 27 12:54:43.495: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:43.495: INFO: Found 0 / 1
Jan 27 12:54:44.520: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:44.520: INFO: Found 0 / 1
Jan 27 12:54:46.904: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:46.904: INFO: Found 0 / 1
Jan 27 12:54:47.510: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:47.511: INFO: Found 0 / 1
Jan 27 12:54:48.498: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:48.498: INFO: Found 0 / 1
Jan 27 12:54:49.516: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:49.516: INFO: Found 0 / 1
Jan 27 12:54:50.506: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:50.506: INFO: Found 0 / 1
Jan 27 12:54:54.511: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:54.512: INFO: Found 0 / 1
Jan 27 12:54:56.071: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:56.071: INFO: Found 0 / 1
Jan 27 12:54:56.492: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:56.492: INFO: Found 0 / 1
Jan 27 12:54:57.642: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:57.642: INFO: Found 0 / 1
Jan 27 12:54:59.465: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:59.465: INFO: Found 0 / 1
Jan 27 12:54:59.814: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:54:59.814: INFO: Found 0 / 1
Jan 27 12:55:00.500: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:55:00.500: INFO: Found 0 / 1
Jan 27 12:55:01.512: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:55:01.512: INFO: Found 0 / 1
Jan 27 12:55:02.504: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:55:02.504: INFO: Found 1 / 1
Jan 27 12:55:02.504: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 27 12:55:02.516: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:55:02.517: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 27 12:55:02.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5jl47 --namespace=e2e-tests-kubectl-jl22r -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 27 12:55:02.711: INFO: stderr: ""
Jan 27 12:55:02.711: INFO: stdout: "pod/redis-master-5jl47 patched\n"
STEP: checking annotations
Jan 27 12:55:02.829: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 12:55:02.829: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:55:02.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jl22r" for this suite.
Jan 27 12:55:28.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:55:29.021: INFO: namespace: e2e-tests-kubectl-jl22r, resource: bindings, ignored listing per whitelist
Jan 27 12:55:29.035: INFO: namespace e2e-tests-kubectl-jl22r deletion completed in 26.182833518s

• [SLOW TEST:48.516 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
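
The patch step in the spec above is logged verbatim and can be reproduced directly from a shell; the only difference from the logged command is that the JSON payload needs quoting when typed by hand. The pod and namespace names are the ones from this particular run and would differ on another cluster.

# Add an annotation to a running pod and read it back
kubectl patch pod redis-master-5jl47 \
  --namespace=e2e-tests-kubectl-jl22r \
  -p '{"metadata":{"annotations":{"x":"y"}}}'

kubectl get pod redis-master-5jl47 \
  --namespace=e2e-tests-kubectl-jl22r \
  -o jsonpath='{.metadata.annotations.x}'      # prints "y"
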
SS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:55:29.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-g5mml
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g5mml to expose endpoints map[]
Jan 27 12:55:29.410: INFO: Get endpoints failed (100.322414ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 27 12:55:30.420: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g5mml exposes endpoints map[] (1.109969218s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-g5mml
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g5mml to expose endpoints map[pod1:[100]]
Jan 27 12:55:34.741: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.278778379s elapsed, will retry)
Jan 27 12:55:40.335: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g5mml exposes endpoints map[pod1:[100]] (9.87209115s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-g5mml
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g5mml to expose endpoints map[pod1:[100] pod2:[101]]
Jan 27 12:55:46.091: INFO: Unexpected endpoints: found map[4c525992-4104-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.739611609s elapsed, will retry)
Jan 27 12:55:51.033: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g5mml exposes endpoints map[pod1:[100] pod2:[101]] (10.681330694s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-g5mml
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g5mml to expose endpoints map[pod2:[101]]
Jan 27 12:55:52.362: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g5mml exposes endpoints map[pod2:[101]] (1.311291739s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-g5mml
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g5mml to expose endpoints map[]
Jan 27 12:55:53.639: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g5mml exposes endpoints map[] (1.058802129s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:55:54.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-g5mml" for this suite.
Jan 27 12:56:19.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:56:19.142: INFO: namespace: e2e-tests-services-g5mml, resource: bindings, ignored listing per whitelist
Jan 27 12:56:19.315: INFO: namespace e2e-tests-services-g5mml deletion completed in 24.358204299s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.280 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
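
The Services spec above repeatedly diffs the Endpoints object of multi-endpoint-test against the expected map as pods are created and deleted. A hand-rolled equivalent needs a Service with two ports targeting container ports 100 and 101 (those target ports are visible in the endpoint maps above; the service ports, port names, and selector below are assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test        # assumed label; the suite labels its pods itself
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF

# The Endpoints object lists ready pod addresses per port; this is what the spec polls above
kubectl get endpoints multi-endpoint-test -o yaml
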
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:56:19.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-69a6207a-4104-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 12:56:19.785: INFO: Waiting up to 5m0s for pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-7qt77" to be "success or failure"
Jan 27 12:56:19.805: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.612218ms
Jan 27 12:56:22.417: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632248101s
Jan 27 12:56:24.437: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.651751561s
Jan 27 12:56:26.452: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666393305s
Jan 27 12:56:28.607: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.822047373s
Jan 27 12:56:30.640: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854847346s
Jan 27 12:56:32.677: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.89198962s
Jan 27 12:56:35.806: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.021195994s
STEP: Saw pod success
Jan 27 12:56:35.807: INFO: Pod "pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 12:56:36.254: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 27 12:56:36.701: INFO: Waiting for pod pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005 to disappear
Jan 27 12:56:36.744: INFO: Pod pod-secrets-69b9b74b-4104-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:56:36.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7qt77" for this suite.
Jan 27 12:56:42.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:56:43.074: INFO: namespace: e2e-tests-secrets-7qt77, resource: bindings, ignored listing per whitelist
Jan 27 12:56:43.144: INFO: namespace e2e-tests-secrets-7qt77 deletion completed in 6.307526496s

• [SLOW TEST:23.829 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
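
The Secrets spec above mounts a single secret into one pod at two different paths and reads it back from both. A rough equivalent, with hypothetical names and a generic busybox image standing in for the suite's mounttest container:

kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-mounts            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF

kubectl logs secret-two-mounts       # the same secret value printed twice, once per mount
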
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:56:43.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 27 12:56:56.157: INFO: Successfully updated pod "labelsupdate77cc0571-4104-11ea-b652-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:56:58.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x44jt" for this suite.
Jan 27 12:57:24.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:57:24.681: INFO: namespace: e2e-tests-downward-api-x44jt, resource: bindings, ignored listing per whitelist
Jan 27 12:57:24.690: INFO: namespace e2e-tests-downward-api-x44jt deletion completed in 26.307771303s

• [SLOW TEST:41.545 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
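
The Downward API spec above ("Successfully updated pod labelsupdate...") relies on a downwardAPI volume that projects metadata.labels being refreshed after the pod's labels change. A sketch of the same mechanism with hypothetical names and a generic image:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                  # hypothetical name
  labels:
    key1: value1
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# A label change shows up inside the volume after the kubelet's next sync
kubectl label pod labels-demo key2=value2
kubectl exec labels-demo -- cat /etc/podinfo/labels
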
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:57:24.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 12:57:25.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-677n2'
Jan 27 12:57:27.941: INFO: stderr: ""
Jan 27 12:57:27.941: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 27 12:57:37.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-677n2 -o json'
Jan 27 12:57:38.160: INFO: stderr: ""
Jan 27 12:57:38.160: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-27T12:57:27Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-677n2\",\n        \"resourceVersion\": \"19639283\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-677n2/pods/e2e-test-nginx-pod\",\n        \"uid\": \"925773d7-4104-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-89l9v\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-89l9v\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-89l9v\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-27T12:57:28Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-27T12:57:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-27T12:57:37Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-27T12:57:27Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://94a309ca3683dba56d419d4ac17ea99b71fba2362070add527d7a5ac7729494f\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-27T12:57:36Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-27T12:57:28Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 27 12:57:38.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-677n2'
Jan 27 12:57:38.507: INFO: stderr: ""
Jan 27 12:57:38.507: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 27 12:57:38.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-677n2'
Jan 27 12:57:48.684: INFO: stderr: ""
Jan 27 12:57:48.684: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:57:48.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-677n2" for this suite.
Jan 27 12:57:56.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:57:56.925: INFO: namespace: e2e-tests-kubectl-677n2, resource: bindings, ignored listing per whitelist
Jan 27 12:57:57.054: INFO: namespace e2e-tests-kubectl-677n2 deletion completed in 8.35962763s

• [SLOW TEST:32.363 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
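
The replace spec above pulls the live pod as JSON, swaps docker.io/library/nginx:1.14-alpine for docker.io/library/busybox:1.29, and pipes the result back through "kubectl replace -f -". Done by hand it looks like the following; the sed-based image swap is a simplification of what the test code does in Go, and the pod and namespace names are the ones from this run.

kubectl get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-677n2 -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f - --namespace=e2e-tests-kubectl-677n2

# Verify the container image after the replacement
kubectl get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-677n2 \
  -o jsonpath='{.spec.containers[0].image}'
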
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:57:57.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 27 12:57:57.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-gqh4r'
Jan 27 12:57:57.498: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 27 12:57:57.498: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 27 12:57:57.511: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 27 12:57:57.542: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 27 12:57:57.631: INFO: scanned /root for discovery docs: 
Jan 27 12:57:57.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-gqh4r'
Jan 27 12:58:24.276: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 27 12:58:24.276: INFO: stdout: "Created e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed\nScaling up e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 27 12:58:24.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gqh4r'
Jan 27 12:58:24.465: INFO: stderr: ""
Jan 27 12:58:24.466: INFO: stdout: "e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed-22jbg "
Jan 27 12:58:24.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed-22jbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqh4r'
Jan 27 12:58:24.693: INFO: stderr: ""
Jan 27 12:58:24.693: INFO: stdout: "true"
Jan 27 12:58:24.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed-22jbg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqh4r'
Jan 27 12:58:24.792: INFO: stderr: ""
Jan 27 12:58:24.792: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 27 12:58:24.792: INFO: e2e-test-nginx-rc-83e7ea01d0fee7b777e290dc617028ed-22jbg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 27 12:58:24.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gqh4r'
Jan 27 12:58:24.910: INFO: stderr: ""
Jan 27 12:58:24.910: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:58:24.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gqh4r" for this suite.
Jan 27 12:58:49.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 12:58:49.198: INFO: namespace: e2e-tests-kubectl-gqh4r, resource: bindings, ignored listing per whitelist
Jan 27 12:58:49.282: INFO: namespace e2e-tests-kubectl-gqh4r deletion completed in 24.234636828s

• [SLOW TEST:52.228 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
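
Both commands the rolling-update spec runs are logged verbatim above; they are reproduced here for readability, minus the harness's --kubeconfig flag. As kubectl itself warns in the stderr lines, --generator=run/v1 and rolling-update are deprecated (and removed in later kubectl releases) in favour of Deployments and "kubectl rollout".

# Create the replication controller
kubectl run e2e-test-nginx-rc --generator=run/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-gqh4r

# Roll it to the same image; a new RC with a hashed name is created, scaled up,
# then renamed back once the old controller is scaled to zero
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine \
  --image-pull-policy=IfNotPresent \
  --namespace=e2e-tests-kubectl-gqh4r
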
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 12:58:49.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 12:58:49.869: INFO: Creating deployment "nginx-deployment"
Jan 27 12:58:49.900: INFO: Waiting for observed generation 1
Jan 27 12:58:53.477: INFO: Waiting for all required pods to come up
Jan 27 12:58:54.877: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 27 12:59:34.448: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 27 12:59:34.463: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 27 12:59:34.497: INFO: Updating deployment nginx-deployment
Jan 27 12:59:34.497: INFO: Waiting for observed generation 2
Jan 27 12:59:37.733: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 27 12:59:38.540: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 27 12:59:39.067: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 27 12:59:42.077: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 27 12:59:42.078: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 27 12:59:42.096: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 27 12:59:42.750: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 27 12:59:42.750: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 27 12:59:43.289: INFO: Updating deployment nginx-deployment
Jan 27 12:59:43.289: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 27 12:59:43.299: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 27 12:59:45.542: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
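
The replica counts checked in the two lines above follow from the Deployment's proportional-scaling arithmetic (the maxSurge=3 / maxUnavailable=2 strategy is visible in the object dump below): with the rollout stuck on the non-existent nginx:404 image, the old ReplicaSet is held at 8 and the new one at 5, for a total of 13 = 10 + maxSurge. Scaling the Deployment from 10 to 30 raises that ceiling to 33 = 30 + maxSurge, and the additional 20 replicas are split roughly in proportion to the existing sizes (8:5), so the old ReplicaSet grows from 8 to 20 and the new one from 5 to 13, which are exactly the .spec.replicas values verified above.
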
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 27 12:59:45.841: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nbgfl/deployments/nginx-deployment,UID:c332952a-4104-11ea-a994-fa163e34d433,ResourceVersion:19639755,Generation:3,CreationTimestamp:2020-01-27 12:58:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-27 12:59:43 +0000 UTC 2020-01-27 12:59:43 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-27 12:59:45 +0000 UTC 2020-01-27 12:58:49 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 27 12:59:47.057: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nbgfl/replicasets/nginx-deployment-5c98f8fb5,UID:ddd6d5fc-4104-11ea-a994-fa163e34d433,ResourceVersion:19639754,Generation:3,CreationTimestamp:2020-01-27 12:59:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c332952a-4104-11ea-a994-fa163e34d433 0xc001ffb2f7 0xc001ffb2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 27 12:59:47.057: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 27 12:59:47.058: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nbgfl/replicasets/nginx-deployment-85ddf47c5d,UID:c338d77a-4104-11ea-a994-fa163e34d433,ResourceVersion:19639760,Generation:3,CreationTimestamp:2020-01-27 12:58:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c332952a-4104-11ea-a994-fa163e34d433 0xc001ffb3b7 0xc001ffb3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 27 12:59:47.778: INFO: Pod "nginx-deployment-5c98f8fb5-5q74b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5q74b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-5q74b,UID:e3395dcc-4104-11ea-a994-fa163e34d433,ResourceVersion:19639715,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dec097 0xc001dec098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dec2b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dec340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.779: INFO: Pod "nginx-deployment-5c98f8fb5-6lt5m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6lt5m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-6lt5m,UID:dde2d8f6-4104-11ea-a994-fa163e34d433,ResourceVersion:19639681,Generation:0,CreationTimestamp:2020-01-27 12:59:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dec3b7 0xc001dec3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dec420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dec440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.779: INFO: Pod "nginx-deployment-5c98f8fb5-7f2hc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7f2hc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-7f2hc,UID:e35917af-4104-11ea-a994-fa163e34d433,ResourceVersion:19639744,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dec657 0xc001dec658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dec6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dec6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.779: INFO: Pod "nginx-deployment-5c98f8fb5-89c89" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-89c89,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-89c89,UID:e3592624-4104-11ea-a994-fa163e34d433,ResourceVersion:19639747,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dec877 0xc001dec878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dec8e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dec900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.779: INFO: Pod "nginx-deployment-5c98f8fb5-ghr6b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ghr6b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-ghr6b,UID:de30c4be-4104-11ea-a994-fa163e34d433,ResourceVersion:19639691,Generation:0,CreationTimestamp:2020-01-27 12:59:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dec977 0xc001dec978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001deca50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001deca70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.780: INFO: Pod "nginx-deployment-5c98f8fb5-mcbvj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mcbvj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-mcbvj,UID:e37d0b6b-4104-11ea-a994-fa163e34d433,ResourceVersion:19639751,Generation:0,CreationTimestamp:2020-01-27 12:59:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001decb37 0xc001decb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ded0c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ded190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.780: INFO: Pod "nginx-deployment-5c98f8fb5-mzftb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mzftb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-mzftb,UID:e358bfe9-4104-11ea-a994-fa163e34d433,ResourceVersion:19639746,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001ded857 0xc001ded858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ded8d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dedb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.780: INFO: Pod "nginx-deployment-5c98f8fb5-pncgh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pncgh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-pncgh,UID:dddd174a-4104-11ea-a994-fa163e34d433,ResourceVersion:19639677,Generation:0,CreationTimestamp:2020-01-27 12:59:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dedd37 0xc001dedd38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dedda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001deddc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.780: INFO: Pod "nginx-deployment-5c98f8fb5-qgds9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qgds9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-qgds9,UID:de4ca533-4104-11ea-a994-fa163e34d433,ResourceVersion:19639693,Generation:0,CreationTimestamp:2020-01-27 12:59:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001dedf37 0xc001dedf38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dedfa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dedfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.781: INFO: Pod "nginx-deployment-5c98f8fb5-srpn8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-srpn8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-srpn8,UID:e358fc98-4104-11ea-a994-fa163e34d433,ResourceVersion:19639745,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001e360e7 0xc001e360e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e36150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e36170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.781: INFO: Pod "nginx-deployment-5c98f8fb5-vbcn9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vbcn9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-vbcn9,UID:e334f39f-4104-11ea-a994-fa163e34d433,ResourceVersion:19639738,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001e36557 0xc001e36558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e365c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e365e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.781: INFO: Pod "nginx-deployment-5c98f8fb5-xkgqc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xkgqc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-xkgqc,UID:e3390c8f-4104-11ea-a994-fa163e34d433,ResourceVersion:19639718,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001e36967 0xc001e36968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e369d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e369f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.781: INFO: Pod "nginx-deployment-5c98f8fb5-zckxn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zckxn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-5c98f8fb5-zckxn,UID:dde2e84d-4104-11ea-a994-fa163e34d433,ResourceVersion:19639686,Generation:0,CreationTimestamp:2020-01-27 12:59:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddd6d5fc-4104-11ea-a994-fa163e34d433 0xc001e36c17 0xc001e36c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e36c80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e36ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.781: INFO: Pod "nginx-deployment-85ddf47c5d-2llkl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2llkl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-2llkl,UID:e339efe4-4104-11ea-a994-fa163e34d433,ResourceVersion:19639717,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e36d67 0xc001e36d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e36e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e36e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.782: INFO: Pod "nginx-deployment-85ddf47c5d-5n4nr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5n4nr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-5n4nr,UID:e352af4a-4104-11ea-a994-fa163e34d433,ResourceVersion:19639724,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e36f07 0xc001e36f08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e36f70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e36f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.782: INFO: Pod "nginx-deployment-85ddf47c5d-67rlm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-67rlm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-67rlm,UID:c39438bc-4104-11ea-a994-fa163e34d433,ResourceVersion:19639611,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37007 0xc001e37008}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e37090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-27 12:58:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b53ab2d3b25474a4ba6df8566c15abe771a4cc53a4d970e9c75df4067a36af20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.782: INFO: Pod "nginx-deployment-85ddf47c5d-8brzn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8brzn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-8brzn,UID:e3526213-4104-11ea-a994-fa163e34d433,ResourceVersion:19639725,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37247 0xc001e37248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e372b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e372d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.782: INFO: Pod "nginx-deployment-85ddf47c5d-chdjk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-chdjk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-chdjk,UID:e3671fe1-4104-11ea-a994-fa163e34d433,ResourceVersion:19639740,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e373f7 0xc001e373f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e37480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.782: INFO: Pod "nginx-deployment-85ddf47c5d-djftk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-djftk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-djftk,UID:e367c7bd-4104-11ea-a994-fa163e34d433,ResourceVersion:19639741,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e374f7 0xc001e374f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e375d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e375f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.783: INFO: Pod "nginx-deployment-85ddf47c5d-fp7cp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fp7cp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-fp7cp,UID:c3947ba8-4104-11ea-a994-fa163e34d433,ResourceVersion:19639621,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37667 0xc001e37668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e377b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e377d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-27 12:58:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3fa04b6274049157911375afcf9eecd6848492c48416b0e4b0c31e7a97760c7d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.783: INFO: Pod "nginx-deployment-85ddf47c5d-fsdb7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fsdb7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-fsdb7,UID:e3672782-4104-11ea-a994-fa163e34d433,ResourceVersion:19639743,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37897 0xc001e37898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e379a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.783: INFO: Pod "nginx-deployment-85ddf47c5d-g2jqs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g2jqs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-g2jqs,UID:c3817565-4104-11ea-a994-fa163e34d433,ResourceVersion:19639602,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37a17 0xc001e37a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e37ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-27 12:58:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d58c6615ee77e3c208064dc163ac2ab7b9c52259842967c08793e7e226c5f67d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.784: INFO: Pod "nginx-deployment-85ddf47c5d-g455w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g455w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-g455w,UID:e353f225-4104-11ea-a994-fa163e34d433,ResourceVersion:19639733,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37c67 0xc001e37c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e37cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.784: INFO: Pod "nginx-deployment-85ddf47c5d-h874g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h874g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-h874g,UID:e353a522-4104-11ea-a994-fa163e34d433,ResourceVersion:19639732,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37da7 0xc001e37da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e37e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.784: INFO: Pod "nginx-deployment-85ddf47c5d-hgvtf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hgvtf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-hgvtf,UID:c36aec7a-4104-11ea-a994-fa163e34d433,ResourceVersion:19639593,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001e37ef7 0xc001e37ef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e37f60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e37f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-27 12:58:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8d2a57768c280bfd602e3493dcbdd85a76974ba02d7004a827b9d17d6d378e6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.784: INFO: Pod "nginx-deployment-85ddf47c5d-kpzzk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kpzzk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-kpzzk,UID:c36df8de-4104-11ea-a994-fa163e34d433,ResourceVersion:19639596,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74047 0xc001d74048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d74180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-27 12:58:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://00d4e45ec95599720c649b819269ac4dc40b024462657e20e0f49d7ac5b7ab27}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.784: INFO: Pod "nginx-deployment-85ddf47c5d-nj85k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nj85k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-nj85k,UID:e367ef05-4104-11ea-a994-fa163e34d433,ResourceVersion:19639742,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74247 0xc001d74248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d742b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d742d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.785: INFO: Pod "nginx-deployment-85ddf47c5d-nkbgw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nkbgw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-nkbgw,UID:c3817c22-4104-11ea-a994-fa163e34d433,ResourceVersion:19639599,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d743c7 0xc001d743c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d74450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-27 12:58:51 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ef5483a432ea994c31abfc8ae52a210664c0b70614a0166255a7af18b39b7398}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.785: INFO: Pod "nginx-deployment-85ddf47c5d-p57ln" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p57ln,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-p57ln,UID:c381ff6d-4104-11ea-a994-fa163e34d433,ResourceVersion:19639625,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74527 0xc001d74528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d745b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-27 12:58:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8881644d59be19f3a77f61f77fe25de6ecd896664ba47478142e0a1457ae14fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.785: INFO: Pod "nginx-deployment-85ddf47c5d-qmb6r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qmb6r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-qmb6r,UID:c381f4e2-4104-11ea-a994-fa163e34d433,ResourceVersion:19639617,Generation:0,CreationTimestamp:2020-01-27 12:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74717 0xc001d74718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d747a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:58:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-27 12:58:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-27 12:59:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://afa6e3deb367eae9e249fa7e315901e8ed9f27be3563d16fb0e97f7d3b36fb6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.785: INFO: Pod "nginx-deployment-85ddf47c5d-stsph" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-stsph,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-stsph,UID:e3677b11-4104-11ea-a994-fa163e34d433,ResourceVersion:19639739,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74867 0xc001d74868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d74970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.786: INFO: Pod "nginx-deployment-85ddf47c5d-thwh9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-thwh9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-thwh9,UID:e33a37fa-4104-11ea-a994-fa163e34d433,ResourceVersion:19639765,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74aa7 0xc001d74aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d74b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 27 12:59:47.786: INFO: Pod "nginx-deployment-85ddf47c5d-z8phh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z8phh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nbgfl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nbgfl/pods/nginx-deployment-85ddf47c5d-z8phh,UID:e334c679-4104-11ea-a994-fa163e34d433,ResourceVersion:19639758,Generation:0,CreationTimestamp:2020-01-27 12:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c338d77a-4104-11ea-a994-fa163e34d433 0xc001d74be7 0xc001d74be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l528s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l528s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-l528s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d74c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d74c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 12:59:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-27 12:59:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 12:59:47.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nbgfl" for this suite.
Jan 27 13:00:38.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:00:38.731: INFO: namespace: e2e-tests-deployment-nbgfl, resource: bindings, ignored listing per whitelist
Jan 27 13:00:38.747: INFO: namespace e2e-tests-deployment-nbgfl deletion completed in 50.423757426s

• [SLOW TEST:109.464 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
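For reference, the proportional-scaling behaviour exercised above can be reproduced by hand along these lines; the Deployment name, labels, image tags, and replica counts are illustrative and not taken from the run.

# Create a Deployment large enough that a rollout leaves two ReplicaSets active.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Start a rollout, then scale while old and new ReplicaSets coexist;
# the controller distributes the added replicas proportionally between them.
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine
kubectl scale deployment/nginx-deployment --replicas=30
kubectl get rs -l name=nginx
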
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:00:38.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-053bf815-4105-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 13:00:41.412: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-gpxtr" to be "success or failure"
Jan 27 13:00:42.298: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 886.70797ms
Jan 27 13:00:44.313: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.901396138s
Jan 27 13:00:46.522: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.110657699s
Jan 27 13:00:48.543: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.131476617s
Jan 27 13:00:50.583: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.170970083s
Jan 27 13:00:52.647: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.235327727s
Jan 27 13:00:54.663: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.251587221s
Jan 27 13:00:57.122: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.710350231s
Jan 27 13:00:59.208: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.795922559s
Jan 27 13:01:01.277: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.865625155s
Jan 27 13:01:03.288: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.876061111s
STEP: Saw pod success
Jan 27 13:01:03.288: INFO: Pod "pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:01:03.290: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 27 13:01:04.375: INFO: Waiting for pod pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005 to disappear
Jan 27 13:01:04.396: INFO: Pod pod-projected-secrets-05a19bd2-4105-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:01:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gpxtr" for this suite.
Jan 27 13:01:10.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:01:10.742: INFO: namespace: e2e-tests-projected-gpxtr, resource: bindings, ignored listing per whitelist
Jan 27 13:01:10.807: INFO: namespace e2e-tests-projected-gpxtr deletion completed in 6.383998667s

• [SLOW TEST:32.060 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
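The projected-secret mount above maps a secret key onto a custom path with an explicit file mode. A minimal sketch of an equivalent Secret and pod follows; the names, key, value, and mode are illustrative, and busybox stands in for the test image.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map
data:
  data-1: dmFsdWUtMQ==              # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping"
            mode: 0400              # the "Item Mode"
EOF
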
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:01:10.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-q2px7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q2px7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q2px7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-q2px7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q2px7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q2px7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 13:01:27.216: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.220: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.226: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.231: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.237: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.243: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.251: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q2px7.svc.cluster.local from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.257: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.266: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.271: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-1750a83b-4105-11ea-b652-0242ac110005)
Jan 27 13:01:27.271: INFO: Lookups using e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-q2px7.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 27 13:01:32.405: INFO: DNS probes using e2e-tests-dns-q2px7/dns-test-1750a83b-4105-11ea-b652-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:01:32.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-q2px7" for this suite.
Jan 27 13:01:40.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:01:40.771: INFO: namespace: e2e-tests-dns-q2px7, resource: bindings, ignored listing per whitelist
Jan 27 13:01:41.036: INFO: namespace e2e-tests-dns-q2px7 deletion completed in 8.413564618s

• [SLOW TEST:30.229 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
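The probe pods above loop over dig and getent lookups and drop OK marker files for each name that resolves. The same cluster-DNS check can be run ad hoc from a throwaway pod; busybox ships nslookup rather than dig, and the pod name and image below are illustrative.

# Resolve the kubernetes.default service the way the wheezy/jessie probers do,
# using both the short and fully qualified names.
kubectl run dns-check --rm -it --restart=Never \
  --image=docker.io/library/busybox:1.29 --command -- sh -c \
  'nslookup kubernetes.default && nslookup kubernetes.default.svc.cluster.local'
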
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:01:41.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 27 13:01:41.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8x95g'
Jan 27 13:01:41.595: INFO: stderr: ""
Jan 27 13:01:41.595: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 27 13:01:42.839: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:42.839: INFO: Found 0 / 1
Jan 27 13:01:43.629: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:43.629: INFO: Found 0 / 1
Jan 27 13:01:44.722: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:44.722: INFO: Found 0 / 1
Jan 27 13:01:45.612: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:45.612: INFO: Found 0 / 1
Jan 27 13:01:46.630: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:46.631: INFO: Found 0 / 1
Jan 27 13:01:47.747: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:47.747: INFO: Found 0 / 1
Jan 27 13:01:48.627: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:48.627: INFO: Found 0 / 1
Jan 27 13:01:49.603: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:49.603: INFO: Found 1 / 1
Jan 27 13:01:49.603: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 27 13:01:49.609: INFO: Selector matched 1 pods for map[app:redis]
Jan 27 13:01:49.609: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 27 13:01:49.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gtjpd redis-master --namespace=e2e-tests-kubectl-8x95g'
Jan 27 13:01:49.813: INFO: stderr: ""
Jan 27 13:01:49.813: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jan 13:01:49.135 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jan 13:01:49.135 # Server started, Redis version 3.2.12\n1:M 27 Jan 13:01:49.136 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jan 13:01:49.136 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 27 13:01:49.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gtjpd redis-master --namespace=e2e-tests-kubectl-8x95g --tail=1'
Jan 27 13:01:50.003: INFO: stderr: ""
Jan 27 13:01:50.003: INFO: stdout: "1:M 27 Jan 13:01:49.136 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 27 13:01:50.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gtjpd redis-master --namespace=e2e-tests-kubectl-8x95g --limit-bytes=1'
Jan 27 13:01:50.157: INFO: stderr: ""
Jan 27 13:01:50.157: INFO: stdout: " "
STEP: exposing timestamps
Jan 27 13:01:50.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gtjpd redis-master --namespace=e2e-tests-kubectl-8x95g --tail=1 --timestamps'
Jan 27 13:01:50.315: INFO: stderr: ""
Jan 27 13:01:50.315: INFO: stdout: "2020-01-27T13:01:49.136543766Z 1:M 27 Jan 13:01:49.136 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 27 13:01:52.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gtjpd redis-master --namespace=e2e-tests-kubectl-8x95g --since=1s'
Jan 27 13:01:53.028: INFO: stderr: ""
Jan 27 13:01:53.028: INFO: stdout: ""
Jan 27 13:01:53.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gtjpd redis-master --namespace=e2e-tests-kubectl-8x95g --since=24h'
Jan 27 13:01:53.156: INFO: stderr: ""
Jan 27 13:01:53.156: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jan 13:01:49.135 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jan 13:01:49.135 # Server started, Redis version 3.2.12\n1:M 27 Jan 13:01:49.136 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jan 13:01:49.136 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 27 13:01:53.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8x95g'
Jan 27 13:01:53.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 27 13:01:53.456: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 27 13:01:53.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8x95g'
Jan 27 13:01:53.639: INFO: stderr: "No resources found.\n"
Jan 27 13:01:53.639: INFO: stdout: ""
Jan 27 13:01:53.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8x95g -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 27 13:01:53.766: INFO: stderr: ""
Jan 27 13:01:53.766: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:01:53.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8x95g" for this suite.
Jan 27 13:02:17.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:02:17.962: INFO: namespace: e2e-tests-kubectl-8x95g, resource: bindings, ignored listing per whitelist
Jan 27 13:02:18.018: INFO: namespace e2e-tests-kubectl-8x95g deletion completed in 24.228666233s

• [SLOW TEST:36.981 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
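The log-filtering steps above are plain kubectl options and can be combined freely; the pod and container names below are the ones from this run and would differ elsewhere. Note that the test invokes the deprecated "kubectl log" spelling, while "kubectl logs" is the current form.

kubectl logs redis-master-gtjpd -c redis-master                         # full container log
kubectl logs redis-master-gtjpd -c redis-master --tail=1                # last line only
kubectl logs redis-master-gtjpd -c redis-master --limit-bytes=1         # first byte only
kubectl logs redis-master-gtjpd -c redis-master --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-gtjpd -c redis-master --since=1s              # only entries from the last second
kubectl logs redis-master-gtjpd -c redis-master --since=24h             # entries from the last 24 hours
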
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:02:18.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-3f67606a-4105-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:02:30.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-72gbx" for this suite.
Jan 27 13:02:54.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:02:54.552: INFO: namespace: e2e-tests-configmap-72gbx, resource: bindings, ignored listing per whitelist
Jan 27 13:02:54.702: INFO: namespace e2e-tests-configmap-72gbx deletion completed in 24.319629824s

• [SLOW TEST:36.684 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
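A ConfigMap can carry base64-encoded bytes in binaryData alongside plain strings in data, and both are projected as files when the map is mounted, which is what the test checks. A minimal sketch with illustrative names, keys, and contents:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1                 # plain UTF-8 text
binaryData:
  dump.bin: AQIDBAUGBwg=          # base64 of the raw bytes 0x01..0x08
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-binary
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1 && od -c /etc/configmap-volume/dump.bin"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd
EOF
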
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:02:54.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:02:55.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-8cw67" to be "success or failure"
Jan 27 13:02:55.052: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.080691ms
Jan 27 13:02:57.519: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486666654s
Jan 27 13:02:59.546: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513964885s
Jan 27 13:03:01.828: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795891584s
Jan 27 13:03:03.857: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824869366s
Jan 27 13:03:05.887: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.854783299s
STEP: Saw pod success
Jan 27 13:03:05.887: INFO: Pod "downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:03:05.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 13:03:06.021: INFO: Waiting for pod downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005 to disappear
Jan 27 13:03:06.100: INFO: Pod downwardapi-volume-554bc3b5-4105-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:03:06.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8cw67" for this suite.
Jan 27 13:03:12.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:03:12.616: INFO: namespace: e2e-tests-downward-api-8cw67, resource: bindings, ignored listing per whitelist
Jan 27 13:03:12.658: INFO: namespace e2e-tests-downward-api-8cw67 deletion completed in 6.537575816s

• [SLOW TEST:17.955 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
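When the container declares no memory limit, a downward API volume item for limits.memory falls back to the node's allocatable memory, which is what the test asserts. A sketch of the relevant pod shape, with illustrative names and a busybox stand-in image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits on purpose, so the reported value is node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
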
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:03:12.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:03:12.842: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-4nfjj" to be "success or failure"
Jan 27 13:03:12.863: INFO: Pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.065584ms
Jan 27 13:03:14.877: INFO: Pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034543145s
Jan 27 13:03:16.900: INFO: Pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057942443s
Jan 27 13:03:18.908: INFO: Pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065600298s
Jan 27 13:03:20.922: INFO: Pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079851778s
STEP: Saw pod success
Jan 27 13:03:20.922: INFO: Pod "downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:03:20.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 13:03:21.118: INFO: Waiting for pod downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005 to disappear
Jan 27 13:03:21.137: INFO: Pod downwardapi-volume-5fee957e-4105-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:03:21.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4nfjj" for this suite.
Jan 27 13:03:27.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:03:27.378: INFO: namespace: e2e-tests-downward-api-4nfjj, resource: bindings, ignored listing per whitelist
Jan 27 13:03:27.490: INFO: namespace e2e-tests-downward-api-4nfjj deletion completed in 6.322959427s

• [SLOW TEST:14.832 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
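The cpu variant has the same shape; only the resource name changes, and a divisor can be added to report the value in millicores. A fragment of the volume stanza, assuming the same pod layout as in the memory sketch above:

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m             # report millicores; node allocatable CPU when no limit is set
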
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:03:27.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-68c23604-4105-11ea-b652-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-68c23650-4105-11ea-b652-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-68c23604-4105-11ea-b652-0242ac110005
STEP: Updating configmap cm-test-opt-upd-68c23650-4105-11ea-b652-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-68c23669-4105-11ea-b652-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:03:42.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d5rsq" for this suite.
Jan 27 13:04:08.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:04:08.547: INFO: namespace: e2e-tests-configmap-d5rsq, resource: bindings, ignored listing per whitelist
Jan 27 13:04:08.597: INFO: namespace e2e-tests-configmap-d5rsq deletion completed in 26.258633739s

• [SLOW TEST:41.107 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
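Marking a configMap volume source optional lets the pod start and keep running even if the referenced ConfigMap is missing, which is why the test can delete one map, update another, and create a third while watching the mounted files change. A fragment of the volumes section, with illustrative names:

  volumes:
  - name: delcm-volume
    configMap:
      name: cm-test-opt-del       # deleted mid-test; its file disappears from the mount
      optional: true
  - name: updcm-volume
    configMap:
      name: cm-test-opt-upd       # updated mid-test; the kubelet refreshes the projected file
      optional: true
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create    # created mid-test; its file appears once the map exists
      optional: true
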
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:04:08.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 27 13:04:23.215: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-8161e966-4105-11ea-b652-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-k8gj5", SelfLink:"/api/v1/namespaces/e2e-tests-pods-k8gj5/pods/pod-submit-remove-8161e966-4105-11ea-b652-0242ac110005", UID:"81778536-4105-11ea-a994-fa163e34d433", ResourceVersion:"19640482", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715727049, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"954106676"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gshft", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002750440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gshft", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002396b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fc1b60), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002396bd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002396bf0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002396bf8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002396bfc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715727049, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715727061, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715727061, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715727049, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002552f60), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002552f80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://df230c017286351b161d05cbbfde46c4202f598be7c9a659c6e436cea2454430"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:04:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k8gj5" for this suite.
Jan 27 13:04:40.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:04:40.941: INFO: namespace: e2e-tests-pods-k8gj5, resource: bindings, ignored listing per whitelist
Jan 27 13:04:41.085: INFO: namespace e2e-tests-pods-k8gj5 deletion completed in 8.425172821s

• [SLOW TEST:32.487 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
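The spec above submits a pod, deletes it with a grace period, and waits for the kubelet to observe the termination. A rough hand-run equivalent, with an illustrative name and image rather than the test's generated fixtures:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: submit-remove-demo        # illustrative name
      labels:
        name: foo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
    EOF
    kubectl delete pod submit-remove-demo --grace-period=30
    kubectl get pods --watch          # the pod goes Terminating and then disappears
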
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:04:41.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 27 13:04:41.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005" in namespace "e2e-tests-downward-api-zvf9s" to be "success or failure"
Jan 27 13:04:41.441: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.312792ms
Jan 27 13:04:43.870: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453397591s
Jan 27 13:04:45.884: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467343325s
Jan 27 13:04:47.931: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.514558608s
Jan 27 13:04:49.951: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534305601s
Jan 27 13:04:52.312: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.89489421s
Jan 27 13:04:54.792: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.375561345s
STEP: Saw pod success
Jan 27 13:04:54.792: INFO: Pod "downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:04:54.812: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005 container client-container: 
STEP: delete the pod
Jan 27 13:04:55.512: INFO: Waiting for pod downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005 to disappear
Jan 27 13:04:55.531: INFO: Pod downwardapi-volume-94b5ff0c-4105-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:04:55.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zvf9s" for this suite.
Jan 27 13:05:01.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:05:01.717: INFO: namespace: e2e-tests-downward-api-zvf9s, resource: bindings, ignored listing per whitelist
Jan 27 13:05:01.722: INFO: namespace e2e-tests-downward-api-zvf9s deletion completed in 6.186268878s

• [SLOW TEST:20.637 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
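The spec above mounts a downward API volume exposing the container's memory request as a file and checks its contents. A minimal manifest of the same shape (names, image, and request size are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
        resources:
          requests:
            memory: 32Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
    EOF
    kubectl logs downwardapi-demo      # prints the request in bytes, e.g. 33554432 for 32Mi
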
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:05:01.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a1160d33-4105-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 13:05:02.215: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005" in namespace "e2e-tests-projected-n948r" to be "success or failure"
Jan 27 13:05:02.302: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.774932ms
Jan 27 13:05:04.524: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308374778s
Jan 27 13:05:06.591: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375332054s
Jan 27 13:05:08.880: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664783212s
Jan 27 13:05:10.915: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.699575272s
Jan 27 13:05:12.935: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.720076698s
STEP: Saw pod success
Jan 27 13:05:12.936: INFO: Pod "pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:05:12.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 27 13:05:13.040: INFO: Waiting for pod pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005 to disappear
Jan 27 13:05:13.063: INFO: Pod pod-projected-secrets-a118b808-4105-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:05:13.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n948r" for this suite.
Jan 27 13:05:19.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:05:19.337: INFO: namespace: e2e-tests-projected-n948r, resource: bindings, ignored listing per whitelist
Jan 27 13:05:19.488: INFO: namespace e2e-tests-projected-n948r deletion completed in 6.404785542s

• [SLOW TEST:17.765 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
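The spec above projects a secret into a volume with a defaultMode while running the pod as a non-root user with an fsGroup, then verifies file ownership and mode. A sketch of such a pod (secret name, UID, and mode are illustrative):

    kubectl create secret generic demo-secret --from-literal=username=admin
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/username"]
        volumeMounts:
        - name: projected-secret
          mountPath: /etc/projected
      volumes:
      - name: projected-secret
        projected:
          defaultMode: 0440
          sources:
          - secret:
              name: demo-secret
    EOF
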
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:05:19.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 27 13:05:29.754: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ab8af09d-4105-11ea-b652-0242ac110005,GenerateName:,Namespace:e2e-tests-events-r2vn7,SelfLink:/api/v1/namespaces/e2e-tests-events-r2vn7/pods/send-events-ab8af09d-4105-11ea-b652-0242ac110005,UID:ab8bd8c3-4105-11ea-a994-fa163e34d433,ResourceVersion:19640635,Generation:0,CreationTimestamp:2020-01-27 13:05:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 687293149,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bhlzm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bhlzm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bhlzm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002193ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002193b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 13:05:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 13:05:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 13:05:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 13:05:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-27 13:05:19 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-27 13:05:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://c7c18db2f441db059be02812b278e6919e9642e6e0b1f86ab1cb145477800873}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 27 13:05:31.768: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 27 13:05:33.797: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:05:33.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-r2vn7" for this suite.
Jan 27 13:06:14.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:06:14.040: INFO: namespace: e2e-tests-events-r2vn7, resource: bindings, ignored listing per whitelist
Jan 27 13:06:14.229: INFO: namespace e2e-tests-events-r2vn7 deletion completed in 40.371953024s

• [SLOW TEST:54.740 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
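The spec above waits for the scheduler's Scheduled event and the kubelet's events for the pod. The same events can be listed directly; the pod name here is illustrative:

    kubectl describe pod send-events-demo    # the Events: section shows scheduler and kubelet entries
    kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-demo
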
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:06:14.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wkjxk;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wkjxk;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wkjxk.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.159.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.159.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.159.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.159.73_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wkjxk;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wkjxk;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wkjxk.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wkjxk.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.159.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.159.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.159.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.159.73_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 27 13:06:32.957: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.963: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.971: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wkjxk from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.976: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wkjxk from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.982: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wkjxk.svc from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.987: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wkjxk.svc from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:32.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:33.000: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:33.004: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:33.008: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:33.012: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005: the server could not find the requested resource (get pods dns-test-cc508887-4105-11ea-b652-0242ac110005)
Jan 27 13:06:33.020: INFO: Lookups using e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wkjxk jessie_tcp@dns-test-service.e2e-tests-dns-wkjxk jessie_udp@dns-test-service.e2e-tests-dns-wkjxk.svc jessie_tcp@dns-test-service.e2e-tests-dns-wkjxk.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wkjxk.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 27 13:06:38.443: INFO: DNS probes using e2e-tests-dns-wkjxk/dns-test-cc508887-4105-11ea-b652-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:06:39.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-wkjxk" for this suite.
Jan 27 13:06:45.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:06:45.971: INFO: namespace: e2e-tests-dns-wkjxk, resource: bindings, ignored listing per whitelist
Jan 27 13:06:46.091: INFO: namespace e2e-tests-dns-wkjxk deletion completed in 6.469053785s

• [SLOW TEST:31.862 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
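The probe pods above loop over dig lookups of the service's A, SRV, and PTR records. The same records can be queried once from any pod in the namespace (the service name and ClusterIP mirror the log; everything else is illustrative):

    dig +noall +answer +search dns-test-service A
    dig +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wkjxk.svc SRV
    dig +noall +answer -x 10.97.159.73       # PTR lookup of the service ClusterIP
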
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:06:46.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-cdgtp
Jan 27 13:06:56.356: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-cdgtp
STEP: checking the pod's current state and verifying that restartCount is present
Jan 27 13:06:56.365: INFO: Initial restart count of pod liveness-exec is 0
Jan 27 13:07:49.350: INFO: Restart count of pod e2e-tests-container-probe-cdgtp/liveness-exec is now 1 (52.984968939s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:07:49.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-cdgtp" for this suite.
Jan 27 13:07:57.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:07:57.748: INFO: namespace: e2e-tests-container-probe-cdgtp, resource: bindings, ignored listing per whitelist
Jan 27 13:07:57.895: INFO: namespace e2e-tests-container-probe-cdgtp deletion completed in 8.2444559s

• [SLOW TEST:71.804 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
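The spec above runs a pod whose exec liveness probe is `cat /tmp/health` and waits for the restart count to reach 1 once the file disappears. A minimal pod of the same shape (image and timings are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: liveness
        image: busybox
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    kubectl get pod liveness-exec-demo -w    # RESTARTS increments once the probe starts failing
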
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:07:57.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 27 13:07:58.250: INFO: Waiting up to 5m0s for pod "pod-09f8946f-4106-11ea-b652-0242ac110005" in namespace "e2e-tests-emptydir-tkk62" to be "success or failure"
Jan 27 13:07:58.289: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.82466ms
Jan 27 13:08:00.309: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059390858s
Jan 27 13:08:02.322: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072390414s
Jan 27 13:08:04.346: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096338435s
Jan 27 13:08:06.408: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158311271s
Jan 27 13:08:08.440: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189483609s
STEP: Saw pod success
Jan 27 13:08:08.440: INFO: Pod "pod-09f8946f-4106-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:08:08.452: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-09f8946f-4106-11ea-b652-0242ac110005 container test-container: 
STEP: delete the pod
Jan 27 13:08:08.753: INFO: Waiting for pod pod-09f8946f-4106-11ea-b652-0242ac110005 to disappear
Jan 27 13:08:08.764: INFO: Pod pod-09f8946f-4106-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:08:08.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tkk62" for this suite.
Jan 27 13:08:15.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:08:15.173: INFO: namespace: e2e-tests-emptydir-tkk62, resource: bindings, ignored listing per whitelist
Jan 27 13:08:15.468: INFO: namespace e2e-tests-emptydir-tkk62 deletion completed in 6.643322029s

• [SLOW TEST:17.573 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
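The spec above mounts an emptyDir on the default (node-disk) medium and asserts the mount's permission bits. The same check by hand (names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}
    EOF
    kubectl logs emptydir-mode-demo          # shows the directory's mode bits
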
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:08:15.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:08:15.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-p887l" for this suite.
Jan 27 13:08:39.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:08:40.076: INFO: namespace: e2e-tests-pods-p887l, resource: bindings, ignored listing per whitelist
Jan 27 13:08:40.133: INFO: namespace e2e-tests-pods-p887l deletion completed in 24.379134906s

• [SLOW TEST:24.663 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
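The spec above only verifies that status.qosClass is populated after the pod is submitted. The class follows from requests and limits; with requests equal to limits on every container the pod is Guaranteed (values here are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
        resources:
          requests: {cpu: 100m, memory: 64Mi}
          limits:   {cpu: 100m, memory: 64Mi}
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
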
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:08:40.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 27 13:08:40.417: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 27 13:08:45.437: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:08:47.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-fgx8x" for this suite.
Jan 27 13:08:59.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:09:00.513: INFO: namespace: e2e-tests-replication-controller-fgx8x, resource: bindings, ignored listing per whitelist
Jan 27 13:09:01.460: INFO: namespace e2e-tests-replication-controller-fgx8x deletion completed in 13.504829921s

• [SLOW TEST:21.328 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
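The spec above relabels a pod so it no longer matches its ReplicationController's selector; the controller releases that pod and starts a replacement. Reproducing the step by hand (names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-release
    spec:
      replicas: 1
      selector:
        name: pod-release
      template:
        metadata:
          labels:
            name: pod-release
        spec:
          containers:
          - name: app
            image: nginx:1.14-alpine
    EOF
    kubectl label pod -l name=pod-release name=released --overwrite
    kubectl get pods -L name     # the relabeled pod keeps running; the RC creates a new matching pod
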
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:09:01.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 27 13:09:26.123: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:26.157: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:28.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:28.173: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:30.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:30.186: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:32.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:32.243: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:34.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:34.177: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:36.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:36.181: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:38.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:38.179: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:40.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:40.184: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:42.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:42.179: INFO: Pod pod-with-poststart-http-hook still exists
Jan 27 13:09:44.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 27 13:09:44.168: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:09:44.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2mxfr" for this suite.
Jan 27 13:10:06.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:10:06.408: INFO: namespace: e2e-tests-container-lifecycle-hook-2mxfr, resource: bindings, ignored listing per whitelist
Jan 27 13:10:06.441: INFO: namespace e2e-tests-container-lifecycle-hook-2mxfr deletion completed in 22.264610803s

• [SLOW TEST:64.980 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
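The spec above starts a handler pod, creates a second pod whose container carries a postStart httpGet hook aimed at the handler, and confirms the hook request arrived. The shape of the hooked container (host IP, port, and path stand in for the handler and are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-http-hook-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
        lifecycle:
          postStart:
            httpGet:
              path: /echo?msg=poststart
              port: 8080
              host: 10.32.0.4          # IP of the handler pod; illustrative value
    EOF

The kubelet runs the hook immediately after the container starts; if the hook fails, the container is killed and handled according to its restart policy.
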
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:10:06.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-4k82
STEP: Creating a pod to test atomic-volume-subpath
Jan 27 13:10:06.826: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4k82" in namespace "e2e-tests-subpath-hzngc" to be "success or failure"
Jan 27 13:10:06.879: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 52.585836ms
Jan 27 13:10:08.956: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12912183s
Jan 27 13:10:10.987: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160768195s
Jan 27 13:10:13.001: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174026861s
Jan 27 13:10:15.027: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200520832s
Jan 27 13:10:17.139: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 10.312148529s
Jan 27 13:10:19.153: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 12.326052687s
Jan 27 13:10:21.171: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Pending", Reason="", readiness=false. Elapsed: 14.344302707s
Jan 27 13:10:23.185: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 16.358806889s
Jan 27 13:10:25.204: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 18.377293414s
Jan 27 13:10:27.219: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 20.392777107s
Jan 27 13:10:29.234: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 22.407465476s
Jan 27 13:10:31.251: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 24.424552124s
Jan 27 13:10:33.271: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 26.444851344s
Jan 27 13:10:35.297: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 28.470689087s
Jan 27 13:10:37.331: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 30.50474219s
Jan 27 13:10:39.365: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Running", Reason="", readiness=false. Elapsed: 32.538022307s
Jan 27 13:10:41.379: INFO: Pod "pod-subpath-test-secret-4k82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.552344639s
STEP: Saw pod success
Jan 27 13:10:41.379: INFO: Pod "pod-subpath-test-secret-4k82" satisfied condition "success or failure"
Jan 27 13:10:41.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-4k82 container test-container-subpath-secret-4k82: 
STEP: delete the pod
Jan 27 13:10:41.910: INFO: Waiting for pod pod-subpath-test-secret-4k82 to disappear
Jan 27 13:10:41.918: INFO: Pod pod-subpath-test-secret-4k82 no longer exists
STEP: Deleting pod pod-subpath-test-secret-4k82
Jan 27 13:10:41.918: INFO: Deleting pod "pod-subpath-test-secret-4k82" in namespace "e2e-tests-subpath-hzngc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:10:42.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hzngc" for this suite.
Jan 27 13:10:50.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:10:50.742: INFO: namespace: e2e-tests-subpath-hzngc, resource: bindings, ignored listing per whitelist
Jan 27 13:10:50.742: INFO: namespace e2e-tests-subpath-hzngc deletion completed in 8.399359889s

• [SLOW TEST:44.300 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
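The spec above mounts a single key of a secret through a volumeMount subPath and verifies it can be read for the life of the pod. A minimal subPath mount of a secret key (secret name and key are illustrative):

    kubectl create secret generic demo-secret --from-literal=username=admin
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "cat /probe/username"]
        volumeMounts:
        - name: secret-vol
          mountPath: /probe/username
          subPath: username
      volumes:
      - name: secret-vol
        secret:
          secretName: demo-secret
    EOF
    kubectl logs pod-subpath-secret-demo     # prints "admin"
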
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:10:50.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 27 13:10:51.023: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:11:06.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-dn7jt" for this suite.
Jan 27 13:11:12.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:11:12.441: INFO: namespace: e2e-tests-init-container-dn7jt, resource: bindings, ignored listing per whitelist
Jan 27 13:11:12.456: INFO: namespace e2e-tests-init-container-dn7jt deletion completed in 6.350811218s

• [SLOW TEST:21.715 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
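The spec above creates a RestartNever pod with init containers and waits for them to complete before the main container runs. A minimal equivalent (names and commands are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-1
        image: busybox
        command: ["sh", "-c", "echo first init step"]
      - name: init-2
        image: busybox
        command: ["sh", "-c", "echo second init step"]
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo main container ran"]
    EOF
    kubectl get pod init-demo    # STATUS moves through Init:0/2, Init:1/2, then Completed
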
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:11:12.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-7df22e34-4106-11ea-b652-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 27 13:11:12.709: INFO: Waiting up to 5m0s for pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005" in namespace "e2e-tests-secrets-jb4pw" to be "success or failure"
Jan 27 13:11:12.808: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.337246ms
Jan 27 13:11:14.833: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123742909s
Jan 27 13:11:16.851: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141784338s
Jan 27 13:11:18.869: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159662727s
Jan 27 13:11:20.900: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191069979s
Jan 27 13:11:22.925: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.216418558s
STEP: Saw pod success
Jan 27 13:11:22.926: INFO: Pod "pod-secrets-7df3114e-4106-11ea-b652-0242ac110005" satisfied condition "success or failure"
Jan 27 13:11:22.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7df3114e-4106-11ea-b652-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 27 13:11:23.122: INFO: Waiting for pod pod-secrets-7df3114e-4106-11ea-b652-0242ac110005 to disappear
Jan 27 13:11:23.135: INFO: Pod pod-secrets-7df3114e-4106-11ea-b652-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:11:23.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jb4pw" for this suite.
Jan 27 13:11:29.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:11:29.279: INFO: namespace: e2e-tests-secrets-jb4pw, resource: bindings, ignored listing per whitelist
Jan 27 13:11:29.290: INFO: namespace e2e-tests-secrets-jb4pw deletion completed in 6.145107603s

• [SLOW TEST:16.833 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
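The spec above creates a secret and consumes it through a volume whose items remap a key to a new path, then reads the remapped file. A sketch (secret name, key, and path are illustrative):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          items:
          - key: data-1
            path: new-path-data-1
    EOF
    kubectl logs secret-mapping-demo         # prints "value-1"
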
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:11:29.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 13:11:29.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 27 13:11:29.539: INFO: stderr: ""
Jan 27 13:11:29.539: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 27 13:11:29.544: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:11:29.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6bms4" for this suite.
Jan 27 13:11:35.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:11:35.738: INFO: namespace: e2e-tests-kubectl-6bms4, resource: bindings, ignored listing per whitelist
Jan 27 13:11:35.793: INFO: namespace e2e-tests-kubectl-6bms4 deletion completed in 6.234884528s

S [SKIPPING] [6.502 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 27 13:11:29.544: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:11:35.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 27 13:11:36.004: INFO: Creating ReplicaSet my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005
Jan 27 13:11:36.029: INFO: Pod name my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005: Found 0 pods out of 1
Jan 27 13:11:41.050: INFO: Pod name my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005: Found 1 pods out of 1
Jan 27 13:11:41.050: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005" is running
Jan 27 13:11:47.088: INFO: Pod "my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005-j55hr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 13:11:36 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 13:11:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 13:11:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 13:11:36 +0000 UTC Reason: Message:}])
Jan 27 13:11:47.088: INFO: Trying to dial the pod
Jan 27 13:11:52.156: INFO: Controller my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005: Got expected result from replica 1 [my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005-j55hr]: "my-hostname-basic-8bd8697d-4106-11ea-b652-0242ac110005-j55hr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:11:52.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9ccgk" for this suite.
Jan 27 13:11:58.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:11:58.540: INFO: namespace: e2e-tests-replicaset-9ccgk, resource: bindings, ignored listing per whitelist
Jan 27 13:11:58.611: INFO: namespace e2e-tests-replicaset-9ccgk deletion completed in 6.438205638s

• [SLOW TEST:22.818 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
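The spec above creates a one-replica ReplicaSet running the serve-hostname image and dials the pod, expecting it to answer with its own name. A matching manifest (labels and name are illustrative; the image is the one from the log):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic-demo
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-hostname-basic-demo
      template:
        metadata:
          labels:
            name: my-hostname-basic-demo
        spec:
          containers:
          - name: my-hostname-basic-demo
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    EOF
    kubectl get pods -l name=my-hostname-basic-demo
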
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 27 13:11:58.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 27 13:11:59.893: INFO: created pod pod-service-account-defaultsa
Jan 27 13:11:59.893: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 27 13:11:59.945: INFO: created pod pod-service-account-mountsa
Jan 27 13:11:59.945: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 27 13:12:00.059: INFO: created pod pod-service-account-nomountsa
Jan 27 13:12:00.059: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 27 13:12:00.106: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 27 13:12:00.106: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 27 13:12:00.131: INFO: created pod pod-service-account-mountsa-mountspec
Jan 27 13:12:00.131: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 27 13:12:00.231: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 27 13:12:00.231: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 27 13:12:00.291: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 27 13:12:00.291: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 27 13:12:00.427: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 27 13:12:00.427: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 27 13:12:01.634: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 27 13:12:01.634: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 27 13:12:01.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-k4cz4" for this suite.
Jan 27 13:12:28.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 27 13:12:28.458: INFO: namespace: e2e-tests-svcaccounts-k4cz4, resource: bindings, ignored listing per whitelist
Jan 27 13:12:28.546: INFO: namespace e2e-tests-svcaccounts-k4cz4 deletion completed in 25.782596861s

• [SLOW TEST:29.935 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
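The spec above creates pods that opt out of service-account token automounting at the ServiceAccount level, the pod level, or both, and then checks whether the token volume is mounted. A minimal opt-out (names are illustrative); the pod-level field takes precedence over the ServiceAccount's setting:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: no-automount-sa
    automountServiceAccountToken: false
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-no-token
    spec:
      serviceAccountName: no-automount-sa
      automountServiceAccountToken: false
      containers:
      - name: app
        image: nginx:1.14-alpine
    EOF
    kubectl get pod pod-no-token -o jsonpath='{.spec.containers[0].volumeMounts}'   # no token mount
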
SSSSSSSSSSSSS
Jan 27 13:12:28.547: INFO: Running AfterSuite actions on all nodes
Jan 27 13:12:28.547: INFO: Running AfterSuite actions on node 1
Jan 27 13:12:28.547: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8715.302 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8715.58s)
FAIL