I0101 10:47:18.934316 8 e2e.go:224] Starting e2e run "149d35fd-2c84-11ea-8bf6-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577875638 - Will randomize all specs
Will run 201 of 2164 specs

Jan 1 10:47:19.307: INFO: >>> kubeConfig: /root/.kube/config
Jan 1 10:47:19.312: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 1 10:47:19.332: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 1 10:47:19.371: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 1 10:47:19.371: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 1 10:47:19.371: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 1 10:47:19.387: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 1 10:47:19.387: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 1 10:47:19.387: INFO: e2e test version: v1.13.12
Jan 1 10:47:19.403: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:47:19.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 1 10:47:19.611: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-157e9ee7-2c84-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 1 10:47:19.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-wdjcl" to be "success or failure"
Jan 1 10:47:19.664: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.948594ms
Jan 1 10:47:21.683: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033391548s
Jan 1 10:47:23.711: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061299267s
Jan 1 10:47:25.725: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074811057s
Jan 1 10:47:28.268: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.617939945s
Jan 1 10:47:30.341: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.69154648s
STEP: Saw pod success
Jan 1 10:47:30.342: INFO: Pod "pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan 1 10:47:30.381: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 1 10:47:30.761: INFO: Waiting for pod pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005 to disappear
Jan 1 10:47:30.854: INFO: Pod pod-projected-secrets-157f2aca-2c84-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:47:30.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wdjcl" for this suite.
Jan 1 10:47:36.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:47:37.006: INFO: namespace: e2e-tests-projected-wdjcl, resource: bindings, ignored listing per whitelist
Jan 1 10:47:37.030: INFO: namespace e2e-tests-projected-wdjcl deletion completed in 6.162083829s

• [SLOW TEST:17.627 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:47:37.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 1 10:47:37.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-bnqhc" to be "success or failure"
Jan 1 10:47:37.299: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.238696ms
Jan 1 10:47:39.523: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257089101s
Jan 1 10:47:41.557: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291079761s
Jan 1 10:47:43.748: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481578886s
Jan 1 10:47:45.759: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492681173s
Jan 1 10:47:47.774: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.507759289s
STEP: Saw pod success
Jan 1 10:47:47.774: INFO: Pod "downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan 1 10:47:47.779: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005 container client-container:
STEP: delete the pod
Jan 1 10:47:48.261: INFO: Waiting for pod downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005 to disappear
Jan 1 10:47:48.338: INFO: Pod downwardapi-volume-200055aa-2c84-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:47:48.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bnqhc" for this suite.
Jan 1 10:47:54.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:47:55.139: INFO: namespace: e2e-tests-downward-api-bnqhc, resource: bindings, ignored listing per whitelist
Jan 1 10:47:55.180: INFO: namespace e2e-tests-downward-api-bnqhc deletion completed in 6.768663445s

• [SLOW TEST:18.150 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:47:55.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 1 10:47:55.326: INFO: Waiting up to 5m0s for pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005" in namespace "e2e-tests-containers-cxr5h" to be "success or failure"
Jan 1 10:47:55.464: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 137.565047ms
Jan 1 10:47:57.478: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151869569s
Jan 1 10:47:59.500: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173148995s
Jan 1 10:48:01.517: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190688687s
Jan 1 10:48:03.545: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218291139s
Jan 1 10:48:06.403: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.076272412s
STEP: Saw pod success
Jan 1 10:48:06.403: INFO: Pod "client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan 1 10:48:06.413: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005 container test-container:
STEP: delete the pod
Jan 1 10:48:06.741: INFO: Waiting for pod client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005 to disappear
Jan 1 10:48:06.754: INFO: Pod client-containers-2ac5ab4b-2c84-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:48:06.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-cxr5h" for this suite.
Jan 1 10:48:12.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:48:13.150: INFO: namespace: e2e-tests-containers-cxr5h, resource: bindings, ignored listing per whitelist
Jan 1 10:48:13.219: INFO: namespace e2e-tests-containers-cxr5h deletion completed in 6.372695442s

• [SLOW TEST:18.039 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:48:13.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 1 10:48:13.553: INFO: Waiting up to 5m0s for pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-ds5m8" to be "success or failure"
Jan 1 10:48:13.576: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.211875ms
Jan 1 10:48:15.729: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175801216s
Jan 1 10:48:17.745: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191295865s
Jan 1 10:48:19.841: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2880877s
Jan 1 10:48:21.911: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.357327977s
Jan 1 10:48:23.925: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.371376814s
STEP: Saw pod success
Jan 1 10:48:23.925: INFO: Pod "downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan 1 10:48:23.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 1 10:48:24.508: INFO: Waiting for pod downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005 to disappear
Jan 1 10:48:24.633: INFO: Pod downward-api-35a2a283-2c84-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:48:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ds5m8" for this suite.
Jan 1 10:48:30.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:48:30.935: INFO: namespace: e2e-tests-downward-api-ds5m8, resource: bindings, ignored listing per whitelist
Jan 1 10:48:31.018: INFO: namespace e2e-tests-downward-api-ds5m8 deletion completed in 6.359943492s

• [SLOW TEST:17.798 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:48:31.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-40319519-2c84-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 1 10:48:31.421: INFO: Waiting up to 5m0s for pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-6pc9m" to be "success or failure"
Jan 1 10:48:31.442: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.05484ms
Jan 1 10:48:33.461: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039278513s
Jan 1 10:48:35.605: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183902282s
Jan 1 10:48:37.863: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441458699s
Jan 1 10:48:40.041: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619945439s
Jan 1 10:48:42.052: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.630680215s
STEP: Saw pod success
Jan 1 10:48:42.052: INFO: Pod "pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan 1 10:48:42.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 1 10:48:42.656: INFO: Waiting for pod pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005 to disappear
Jan 1 10:48:42.887: INFO: Pod pod-secrets-40473fbc-2c84-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:48:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6pc9m" for this suite.
Jan 1 10:48:48.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:48:49.020: INFO: namespace: e2e-tests-secrets-6pc9m, resource: bindings, ignored listing per whitelist
Jan 1 10:48:49.140: INFO: namespace e2e-tests-secrets-6pc9m deletion completed in 6.23765609s
STEP: Destroying namespace "e2e-tests-secret-namespace-2df2m" for this suite.
Jan 1 10:48:55.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:48:55.233: INFO: namespace: e2e-tests-secret-namespace-2df2m, resource: bindings, ignored listing per whitelist
Jan 1 10:48:55.362: INFO: namespace e2e-tests-secret-namespace-2df2m deletion completed in 6.221455887s

• [SLOW TEST:24.343 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:48:55.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 1 10:49:13.685: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 1 10:49:13.717: INFO: Pod pod-with-poststart-http-hook still exists
Jan 1 10:49:15.718: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 1 10:49:16.346: INFO: Pod pod-with-poststart-http-hook still exists
Jan 1 10:49:17.718: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 1 10:49:17.732: INFO: Pod pod-with-poststart-http-hook still exists
Jan 1 10:49:19.718: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 1 10:49:19.738: INFO: Pod pod-with-poststart-http-hook still exists
Jan 1 10:49:21.718: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 1 10:49:21.736: INFO: Pod pod-with-poststart-http-hook still exists
Jan 1 10:49:23.718: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 1 10:49:23.746: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:49:23.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8vgjv" for this suite.
Jan 1 10:49:47.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:49:47.899: INFO: namespace: e2e-tests-container-lifecycle-hook-8vgjv, resource: bindings, ignored listing per whitelist
Jan 1 10:49:47.993: INFO: namespace e2e-tests-container-lifecycle-hook-8vgjv deletion completed in 24.225587615s

• [SLOW TEST:52.631 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:49:47.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 1 10:49:48.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:49:56.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-crrvr" for this suite.
Jan 1 10:50:45.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:50:45.458: INFO: namespace: e2e-tests-pods-crrvr, resource: bindings, ignored listing per whitelist
Jan 1 10:50:45.471: INFO: namespace e2e-tests-pods-crrvr deletion completed in 48.523908423s

• [SLOW TEST:57.478 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:50:45.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 1 10:51:03.817: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 10:51:03.872: INFO: Pod pod-with-prestop-http-hook still exists
Jan 1 10:51:05.872: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 10:51:05.897: INFO: Pod pod-with-prestop-http-hook still exists
Jan 1 10:51:07.873: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 10:51:07.993: INFO: Pod pod-with-prestop-http-hook still exists
Jan 1 10:51:09.873: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 10:51:09.889: INFO: Pod pod-with-prestop-http-hook still exists
Jan 1 10:51:11.873: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 10:51:11.905: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:51:11.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-k858c" for this suite.
Jan 1 10:51:36.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:51:36.261: INFO: namespace: e2e-tests-container-lifecycle-hook-k858c, resource: bindings, ignored listing per whitelist
Jan 1 10:51:36.311: INFO: namespace e2e-tests-container-lifecycle-hook-k858c deletion completed in 24.281927227s

• [SLOW TEST:50.839 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:51:36.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-5c5hq
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5c5hq
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5c5hq
Jan 1 10:51:36.705: INFO: Found 0 stateful pods, waiting for 1
Jan 1 10:51:46.974: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 1 10:51:56.744: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 1 10:51:56.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 1 10:51:57.439: INFO: stderr: ""
Jan 1 10:51:57.440: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 1 10:51:57.440: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 1 10:51:57.465: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 1 10:52:07.483: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 1 10:52:07.483: INFO: Waiting for statefulset status.replicas updated to 0
Jan 1 10:52:07.524: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 1 10:52:07.524: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }]
Jan 1 10:52:07.524: INFO:
Jan 1 10:52:07.524: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 1 10:52:09.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986808177s
Jan 1 10:52:10.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.459345261s
Jan 1 10:52:11.301: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.223910448s
Jan 1 10:52:12.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.210463457s
Jan 1 10:52:13.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.167983007s
Jan 1 10:52:14.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.130355332s
Jan 1 10:52:16.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.112835996s
Jan 1 10:52:17.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 57.843139ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5c5hq
Jan 1 10:52:18.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 10:52:19.973: INFO: stderr: ""
Jan 1 10:52:19.973: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 1 10:52:19.973: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 1 10:52:19.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 10:52:20.173: INFO: rc: 1
Jan 1 10:52:20.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0010965a0 exit status 1 true [0xc0016aad30 0xc0016aada0 0xc0016aadd0] [0xc0016aad30 0xc0016aada0 0xc0016aadd0] [0xc0016aad58 0xc0016aadc0] [0x935700 0x935700] 0xc001e91560 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Jan 1 10:52:30.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 10:52:30.917: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jan 1 10:52:30.918: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 1 10:52:30.918: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 1 10:52:30.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 10:52:31.420: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jan 1 10:52:31.420: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 1 10:52:31.420: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 1 10:52:31.434: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 10:52:31.434: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 10:52:31.434: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 1 10:52:31.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 1 10:52:32.113: INFO: stderr: ""
Jan 1 10:52:32.114: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 1 10:52:32.114: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 1 10:52:32.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 1 10:52:32.750: INFO: stderr: ""
Jan 1 10:52:32.750: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 1 10:52:32.750: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 1 10:52:32.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 1 10:52:33.221: INFO: stderr: ""
Jan 1 10:52:33.221: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 1 10:52:33.221: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 1 10:52:33.221: INFO: Waiting for statefulset status.replicas updated to 0
Jan 1 10:52:33.254: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 1 10:52:43.332: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 1 10:52:43.332: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 1 10:52:43.332: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 1 10:52:43.500: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jan 1 10:52:43.500: INFO: ss-0
hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:43.500: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:43.500: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:43.500: INFO: Jan 1 10:52:43.500: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:44.955: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:44.956: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:44.956: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:44.956: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:44.956: INFO: Jan 1 10:52:44.956: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:46.020: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:46.020: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:46.020: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:46.020: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:46.020: INFO: Jan 1 10:52:46.020: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:47.050: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:47.050: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:47.051: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:47.051: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:47.051: INFO: Jan 1 10:52:47.051: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:48.065: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:48.065: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:48.066: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:48.066: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:48.066: INFO: Jan 1 10:52:48.066: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:49.095: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:49.096: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:49.096: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:49.096: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:49.096: INFO: Jan 1 10:52:49.096: INFO: StatefulSet ss has not reached scale 0, 
at 3 Jan 1 10:52:51.005: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:51.006: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:51.006: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:51.006: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:51.006: INFO: Jan 1 10:52:51.006: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:52.031: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:52.032: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:52.032: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:52.032: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:52.032: INFO: Jan 1 10:52:52.032: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 1 10:52:53.100: INFO: POD NODE PHASE GRACE CONDITIONS Jan 1 10:52:53.100: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:51:36 +0000 UTC }] Jan 1 10:52:53.100: INFO: ss-1 hunter-server-hu5at5svl7ps 
Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:53.100: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 10:52:07 +0000 UTC }] Jan 1 10:52:53.100: INFO: Jan 1 10:52:53.100: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-5c5hq Jan 1 10:52:54.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:52:54.278: INFO: rc: 1 Jan 1 10:52:54.278: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001adeff0 exit status 1 true [0xc0004868d8 0xc000486970 0xc000486ad0] [0xc0004868d8 0xc000486970 0xc000486ad0] [0xc000486948 0xc000486ac8] [0x935700 0x935700] 0xc001d4de00 }: Command stdout: stderr: error: unable to upgrade 
connection: container not found ("nginx") error: exit status 1 Jan 1 10:53:04.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:53:04.504: INFO: rc: 1 Jan 1 10:53:04.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001adf140 exit status 1 true [0xc000486ae8 0xc000486b30 0xc000486ba0] [0xc000486ae8 0xc000486b30 0xc000486ba0] [0xc000486b10 0xc000486b78] [0x935700 0x935700] 0xc0015069c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:53:14.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:53:14.704: INFO: rc: 1 Jan 1 10:53:14.705: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e43b90 exit status 1 true [0xc000a22230 0xc000a222a8 0xc000a222c0] [0xc000a22230 0xc000a222a8 0xc000a222c0] [0xc000a22298 0xc000a222b8] [0x935700 0x935700] 0xc001bdb380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:53:24.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:53:24.876: INFO: rc: 1 Jan 1 10:53:24.877: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001848a20 exit status 1 true [0xc0016ab1b0 0xc0016ab1c8 0xc0016ab1e0] [0xc0016ab1b0 0xc0016ab1c8 0xc0016ab1e0] [0xc0016ab1c0 0xc0016ab1d8] [0x935700 0x935700] 0xc000fc9320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:53:34.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:53:35.050: INFO: rc: 1 Jan 1 10:53:35.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001adf2f0 exit status 1 true [0xc000486c30 0xc000486ca8 0xc000486d68] [0xc000486c30 0xc000486ca8 0xc000486d68] [0xc000486c68 0xc000486d60] [0x935700 0x935700] 0xc001539140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:53:45.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:53:45.202: INFO: rc: 1 Jan 1 10:53:45.203: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001848b70 exit 
status 1 true [0xc0016ab1e8 0xc0016ab228 0xc0016ab248] [0xc0016ab1e8 0xc0016ab228 0xc0016ab248] [0xc0016ab208 0xc0016ab240] [0x935700 0x935700] 0xc000fc9980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:53:55.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:53:55.372: INFO: rc: 1 Jan 1 10:53:55.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001096180 exit status 1 true [0xc000a22000 0xc000a22058 0xc000a220a8] [0xc000a22000 0xc000a22058 0xc000a220a8] [0xc000a22040 0xc000a22098] [0x935700 0x935700] 0xc001a49140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:54:05.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:54:05.537: INFO: rc: 1 Jan 1 10:54:05.537: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000cf0120 exit status 1 true [0xc00001e018 0xc00001e0d0 0xc00001e1c8] [0xc00001e018 0xc00001e0d0 0xc00001e1c8] [0xc00001e0c0 0xc00001e1c0] [0x935700 0x935700] 0xc001c6a420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:54:15.538: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:54:15.717: INFO: rc: 1 Jan 1 10:54:15.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010962d0 exit status 1 true [0xc000a220b8 0xc000a220e8 0xc000a22160] [0xc000a220b8 0xc000a220e8 0xc000a22160] [0xc000a220d0 0xc000a22128] [0x935700 0x935700] 0xc00172fec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:54:25.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:54:25.887: INFO: rc: 1 Jan 1 10:54:25.888: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e42120 exit status 1 true [0xc0004860a8 0xc0004860f0 0xc000486178] [0xc0004860a8 0xc0004860f0 0xc000486178] [0xc0004860e8 0xc000486120] [0x935700 0x935700] 0xc001d4c8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:54:35.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:54:36.103: INFO: rc: 1 Jan 1 10:54:36.103: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000cf0270 exit status 1 true [0xc00001e1d8 0xc00001e240 0xc00001e270] [0xc00001e1d8 0xc00001e240 0xc00001e270] [0xc00001e208 0xc00001e268] [0x935700 0x935700] 0xc001c1eba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:54:46.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:54:46.371: INFO: rc: 1 Jan 1 10:54:46.371: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000cf0390 exit status 1 true [0xc00001e280 0xc00001e2b0 0xc00001e310] [0xc00001e280 0xc00001e2b0 0xc00001e310] [0xc00001e2a8 0xc00001e2e0] [0x935700 0x935700] 0xc001c1f3e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 1 10:54:56.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 1 10:54:56.638: INFO: rc: 1 Jan 1 10:54:56.639: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000cf04e0 exit status 1 true [0xc00001e330 0xc00001e398 0xc00001e3f0] [0xc00001e330 0xc00001e398 0xc00001e3f0] 
[0xc00001e378 0xc00001e3d8] [0x935700 0x935700] 0xc001c1f860 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jan 1 10:55:06.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 10:55:06.798: INFO: rc: 1
Jan 1 10:55:06.798: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001096480 exit status 1 true [0xc000a22178 0xc000a221c8 0xc000a22230] [0xc000a22178 0xc000a221c8 0xc000a22230] [0xc000a221a0 0xc000a22210] [0x935700 0x935700] 0xc0011baa20 }: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
... (the same RunHostCmd attempt was retried every 10s from 10:55:16 through 10:57:49, each returning rc: 1 with the identical NotFound error) ...
Jan 1 10:57:59.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5c5hq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 10:57:59.684: INFO: rc: 1
Jan 1 10:57:59.685: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Jan 1 10:57:59.685: INFO: Scaling statefulset ss to 0
Jan 1 10:57:59.721: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 1 10:57:59.724: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5c5hq
Jan 1 10:57:59.728: INFO: Scaling statefulset ss to 0
Jan 1 10:57:59.739: INFO: Waiting for statefulset status.replicas updated to 0
Jan 1 10:57:59.742: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:57:59.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5c5hq" for this suite.
Jan 1 10:58:07.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:58:07.992: INFO: namespace: e2e-tests-statefulset-5c5hq, resource: bindings, ignored listing per whitelist
Jan 1 10:58:08.002: INFO: namespace e2e-tests-statefulset-5c5hq deletion completed in 8.216257652s
• [SLOW TEST:391.691 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:58:08.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 1 10:58:08.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:10.201: INFO: stderr: ""
Jan 1 10:58:10.202: INFO: stdout: "pod/pause created\n"
Jan 1 10:58:10.202: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 1 10:58:10.202: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-697bw" to be "running and ready"
Jan 1 10:58:10.217: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.846468ms
Jan 1 10:58:12.239: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0366685s
Jan 1 10:58:14.249: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04667473s
Jan 1 10:58:16.266: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063602704s
Jan 1 10:58:18.284: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.081499132s
Jan 1 10:58:18.284: INFO: Pod "pause" satisfied condition "running and ready"
Jan 1 10:58:18.284: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 1 10:58:18.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:18.651: INFO: stderr: ""
Jan 1 10:58:18.652: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 1 10:58:18.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:18.793: INFO: stderr: ""
Jan 1 10:58:18.793: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 1 10:58:18.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:18.904: INFO: stderr: ""
Jan 1 10:58:18.904: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 1 10:58:18.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:19.030: INFO: stderr: ""
Jan 1 10:58:19.030: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 1 10:58:19.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:19.190: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 1 10:58:19.190: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 1 10:58:19.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-697bw'
Jan 1 10:58:19.370: INFO: stderr: "No resources found.\n"
Jan 1 10:58:19.370: INFO: stdout: ""
Jan 1 10:58:19.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-697bw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 1 10:58:19.468: INFO: stderr: ""
Jan 1 10:58:19.469: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 10:58:19.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-697bw" for this suite.
Jan 1 10:58:25.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 10:58:25.567: INFO: namespace: e2e-tests-kubectl-697bw, resource: bindings, ignored listing per whitelist
Jan 1 10:58:25.698: INFO: namespace e2e-tests-kubectl-697bw deletion completed in 6.217315175s
• [SLOW TEST:17.696 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 10:58:25.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 1 11:01:31.720: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 1 11:01:31.776: INFO: Pod pod-with-poststart-exec-hook still exists
... (the same check repeated every 2s from 11:01:33 through 11:01:53, with the pod still present at each check) ...
Jan 1 11:01:55.777: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 1 11:01:55.799: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 11:01:55.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lhxbq" for this suite.
Jan 1 11:02:19.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 11:02:20.037: INFO: namespace: e2e-tests-container-lifecycle-hook-lhxbq, resource: bindings, ignored listing per whitelist
Jan 1 11:02:20.075: INFO: namespace e2e-tests-container-lifecycle-hook-lhxbq deletion completed in 24.263221578s
• [SLOW TEST:234.376 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 11:02:20.075: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-9wptg
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 1 11:02:20.411: INFO: Found 0 stateful pods, waiting for 3
Jan 1 11:02:30.467: INFO: Found 1 stateful pods, waiting for 3
Jan 1 11:02:40.437: INFO: Found 2 stateful pods, waiting for 3
Jan 1 11:02:50.447: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 11:02:50.447: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 11:02:50.447: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 1 11:03:00.444: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 11:03:00.444: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 11:03:00.444: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 1 11:03:00.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wptg ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 1 11:03:01.045: INFO: stderr: ""
Jan 1 11:03:01.045: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 1 11:03:01.045: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 1 11:03:11.179: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 1 11:03:21.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wptg ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 11:03:21.920: INFO: stderr: ""
Jan 1 11:03:21.920: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 1 11:03:21.920: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 1 11:03:32.076: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:03:32.076: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:03:32.076: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:03:42.114: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:03:42.114: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:03:42.114: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:03:52.093: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:03:52.093: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:03:52.093: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:04:02.133: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:04:02.133: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 1 11:04:12.647: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 1 11:04:22.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wptg ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 1 11:04:23.098: INFO: stderr: ""
Jan 1 11:04:23.099: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 1 11:04:23.099: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 1 11:04:33.208: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 1 11:04:43.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wptg ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 1 11:04:44.677: INFO: stderr: ""
Jan 1 11:04:44.677: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 1 11:04:44.677: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 1 11:04:44.856: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:04:44.856: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:04:44.856: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:04:44.856: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:04:54.913: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:04:54.913: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:04:54.913: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:05:04.883: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:05:04.884: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:05:04.884: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:05:15.054: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:05:15.054: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:05:24.881: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
Jan 1 11:05:24.881: INFO: Waiting for Pod e2e-tests-statefulset-9wptg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 1 11:05:34.892: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wptg/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 1 11:05:44.879: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9wptg
Jan 1 11:05:44.885: INFO: Scaling statefulset ss2 to 0
Jan 1 11:06:14.948: INFO: Waiting for statefulset status.replicas updated to 0
Jan 1 11:06:14.957: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 1 11:06:14.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-9wptg" for this suite.
Jan 1 11:06:23.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 11:06:23.122: INFO: namespace: e2e-tests-statefulset-9wptg, resource: bindings, ignored listing per whitelist
Jan 1 11:06:23.168: INFO: namespace e2e-tests-statefulset-9wptg deletion completed in 8.152743612s
• [SLOW TEST:243.094 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 1 11:06:23.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 1 11:06:35.559: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"",
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-bf3bea07-2c86-11ea-8bf6-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-dq9sz", SelfLink:"/api/v1/namespaces/e2e-tests-pods-dq9sz/pods/pod-submit-remove-bf3bea07-2c86-11ea-8bf6-0242ac110005", UID:"bf3fe935-2c86-11ea-a994-fa163e34d433", ResourceVersion:"16785048", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713473583, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"383633366", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8hx64", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fcf040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8hx64", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b97d78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000fd7680), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b97db0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b97dd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", 
Priority:(*int32)(0xc001b97dd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b97ddc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713473583, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713473593, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713473593, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713473583, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001a74360), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001a74380), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", 
ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://47323a0702824938af72c60d006b0e4d03cf5a0b10e86f0276e9e7860be85bf7"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:06:52.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dq9sz" for this suite. Jan 1 11:06:58.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:06:58.817: INFO: namespace: e2e-tests-pods-dq9sz, resource: bindings, ignored listing per whitelist Jan 1 11:06:58.894: INFO: namespace e2e-tests-pods-dq9sz deletion completed in 6.215494979s • [SLOW TEST:35.726 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:06:58.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 1 11:07:25.179: INFO: Container started at 2020-01-01 11:07:07 +0000 UTC, pod became ready at 2020-01-01 11:07:23 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:07:25.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-zlq5q" for this suite. Jan 1 11:07:49.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:07:49.284: INFO: namespace: e2e-tests-container-probe-zlq5q, resource: bindings, ignored listing per whitelist Jan 1 11:07:49.438: INFO: namespace e2e-tests-container-probe-zlq5q deletion completed in 24.249414468s • [SLOW TEST:50.543 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:07:49.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-s4rcj Jan 1 11:07:59.674: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-s4rcj STEP: checking the pod's current state and verifying that restartCount is present Jan 1 11:07:59.679: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:12:00.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s4rcj" for this suite. 
Jan 1 11:12:06.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:12:06.647: INFO: namespace: e2e-tests-container-probe-s4rcj, resource: bindings, ignored listing per whitelist Jan 1 11:12:06.933: INFO: namespace e2e-tests-container-probe-s4rcj deletion completed in 6.410801685s • [SLOW TEST:257.494 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:12:06.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 1 11:12:07.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-6jfpj" to be "success or failure" Jan 1 
11:12:07.206: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.33389ms Jan 1 11:12:09.226: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03034638s Jan 1 11:12:11.243: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047160141s Jan 1 11:12:13.469: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273303042s Jan 1 11:12:15.484: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288357991s Jan 1 11:12:17.505: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.309312656s STEP: Saw pod success Jan 1 11:12:17.506: INFO: Pod "downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005" satisfied condition "success or failure" Jan 1 11:12:17.510: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005 container client-container: STEP: delete the pod Jan 1 11:12:18.193: INFO: Waiting for pod downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005 to disappear Jan 1 11:12:18.546: INFO: Pod downwardapi-volume-8c27b2a4-2c87-11ea-8bf6-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:12:18.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6jfpj" for this suite. 
Jan 1 11:12:24.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:12:24.871: INFO: namespace: e2e-tests-downward-api-6jfpj, resource: bindings, ignored listing per whitelist Jan 1 11:12:24.873: INFO: namespace e2e-tests-downward-api-6jfpj deletion completed in 6.297109833s • [SLOW TEST:17.939 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:12:24.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 1 11:12:35.682: INFO: Successfully updated pod "labelsupdate96ca83c7-2c87-11ea-8bf6-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:12:37.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-projected-h9q5l" for this suite. Jan 1 11:13:01.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:13:01.960: INFO: namespace: e2e-tests-projected-h9q5l, resource: bindings, ignored listing per whitelist Jan 1 11:13:02.290: INFO: namespace e2e-tests-projected-h9q5l deletion completed in 24.413611881s • [SLOW TEST:37.417 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:13:02.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 1 11:13:02.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-brh4g" to be "success or failure" Jan 1 11:13:02.624: INFO: Pod 
"downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.640253ms Jan 1 11:13:04.649: INFO: Pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03515671s Jan 1 11:13:06.690: INFO: Pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075640623s Jan 1 11:13:08.726: INFO: Pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112399674s Jan 1 11:13:10.779: INFO: Pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165098803s Jan 1 11:13:12.796: INFO: Pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182127246s STEP: Saw pod success Jan 1 11:13:12.796: INFO: Pod "downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005" satisfied condition "success or failure" Jan 1 11:13:12.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005 container client-container: STEP: delete the pod Jan 1 11:13:13.421: INFO: Waiting for pod downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005 to disappear Jan 1 11:13:13.975: INFO: Pod downwardapi-volume-ad29c8fa-2c87-11ea-8bf6-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:13:13.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-brh4g" for this suite. 
Jan 1 11:13:20.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:13:20.204: INFO: namespace: e2e-tests-projected-brh4g, resource: bindings, ignored listing per whitelist Jan 1 11:13:20.230: INFO: namespace e2e-tests-projected-brh4g deletion completed in 6.233101568s • [SLOW TEST:17.940 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:13:20.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 1 11:13:20.686: INFO: Waiting up to 5m0s for pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-9dpnh" to be "success or failure" Jan 1 11:13:20.701: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.574901ms Jan 1 11:13:22.742: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.055672962s Jan 1 11:13:24.761: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074371278s Jan 1 11:13:26.784: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097036262s Jan 1 11:13:28.798: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111184141s Jan 1 11:13:30.830: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142906057s STEP: Saw pod success Jan 1 11:13:30.830: INFO: Pod "pod-b7f16476-2c87-11ea-8bf6-0242ac110005" satisfied condition "success or failure" Jan 1 11:13:30.849: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b7f16476-2c87-11ea-8bf6-0242ac110005 container test-container: STEP: delete the pod Jan 1 11:13:31.004: INFO: Waiting for pod pod-b7f16476-2c87-11ea-8bf6-0242ac110005 to disappear Jan 1 11:13:31.015: INFO: Pod pod-b7f16476-2c87-11ea-8bf6-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:13:31.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9dpnh" for this suite. 
Jan 1 11:13:37.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:13:37.161: INFO: namespace: e2e-tests-emptydir-9dpnh, resource: bindings, ignored listing per whitelist Jan 1 11:13:37.236: INFO: namespace e2e-tests-emptydir-9dpnh deletion completed in 6.149351581s • [SLOW TEST:17.005 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:13:37.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jan 1 11:13:47.796: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:14:29.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-pnlnn" for this suite. Jan 1 11:14:35.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:14:35.148: INFO: namespace: e2e-tests-namespaces-pnlnn, resource: bindings, ignored listing per whitelist Jan 1 11:14:35.259: INFO: namespace e2e-tests-namespaces-pnlnn deletion completed in 6.240604069s STEP: Destroying namespace "e2e-tests-nsdeletetest-hh8bt" for this suite. Jan 1 11:14:35.262: INFO: Namespace e2e-tests-nsdeletetest-hh8bt was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-prgvd" for this suite. Jan 1 11:14:41.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 11:14:41.464: INFO: namespace: e2e-tests-nsdeletetest-prgvd, resource: bindings, ignored listing per whitelist Jan 1 11:14:41.467: INFO: namespace e2e-tests-nsdeletetest-prgvd deletion completed in 6.205392474s • [SLOW TEST:64.231 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 1 11:14:41.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 1 11:14:41.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-77p7j' Jan 1 11:14:43.448: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 1 11:14:43.448: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jan 1 11:14:47.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-77p7j' Jan 1 11:14:47.756: INFO: stderr: "" Jan 1 11:14:47.756: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 1 11:14:47.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-77p7j" for this suite. 
Jan  1 11:15:09.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:15:10.103: INFO: namespace: e2e-tests-kubectl-77p7j, resource: bindings, ignored listing per whitelist
Jan  1 11:15:10.190: INFO: namespace e2e-tests-kubectl-77p7j deletion completed in 22.388142387s

• [SLOW TEST:28.722 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret and ConfigMap suites continue below.
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:15:10.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-f95dbfcf-2c87-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f95dbfcf-2c87-11ea-8bf6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:16:51.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6l9sd" for this suite.
Jan  1 11:17:16.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:17:16.071: INFO: namespace: e2e-tests-configmap-6l9sd, resource: bindings, ignored listing per whitelist
Jan  1 11:17:16.411: INFO: namespace e2e-tests-configmap-6l9sd deletion completed in 24.494135992s

• [SLOW TEST:126.221 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:17:16.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  1 11:17:17.867: INFO: Pod name wrapped-volume-race-453edfc5-2c88-11ea-8bf6-0242ac110005: Found 0 pods out of 5
Jan  1 11:17:22.890: INFO: Pod name wrapped-volume-race-453edfc5-2c88-11ea-8bf6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-453edfc5-2c88-11ea-8bf6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-2cp4l, will wait for the garbage collector to delete the pods
Jan  1 11:19:37.068: INFO: Deleting ReplicationController wrapped-volume-race-453edfc5-2c88-11ea-8bf6-0242ac110005 took: 66.683669ms
Jan  1 11:19:37.569: INFO: Terminating ReplicationController wrapped-volume-race-453edfc5-2c88-11ea-8bf6-0242ac110005 pods took: 500.823922ms
STEP: Creating RC which spawns configmap-volume pods
Jan  1 11:20:23.268: INFO: Pod name wrapped-volume-race-b3c930fd-2c88-11ea-8bf6-0242ac110005: Found 0 pods out of 5
Jan  1 11:20:28.306: INFO: Pod name wrapped-volume-race-b3c930fd-2c88-11ea-8bf6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b3c930fd-2c88-11ea-8bf6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-2cp4l, will wait for the garbage collector to delete the pods
Jan  1 11:22:32.500: INFO: Deleting ReplicationController wrapped-volume-race-b3c930fd-2c88-11ea-8bf6-0242ac110005 took: 44.376491ms
Jan  1 11:22:33.002: INFO: Terminating ReplicationController wrapped-volume-race-b3c930fd-2c88-11ea-8bf6-0242ac110005 pods took: 501.532513ms
STEP: Creating RC which spawns configmap-volume pods
Jan  1 11:23:23.948: INFO: Pod name wrapped-volume-race-1f645f29-2c89-11ea-8bf6-0242ac110005: Found 0 pods out of 5
Jan  1 11:23:28.973: INFO: Pod name wrapped-volume-race-1f645f29-2c89-11ea-8bf6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1f645f29-2c89-11ea-8bf6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-2cp4l, will wait for the garbage collector to delete the pods
Jan  1 11:25:13.186: INFO: Deleting ReplicationController wrapped-volume-race-1f645f29-2c89-11ea-8bf6-0242ac110005 took: 26.543437ms
Jan  1 11:25:13.687: INFO: Terminating ReplicationController wrapped-volume-race-1f645f29-2c89-11ea-8bf6-0242ac110005 pods took: 501.312486ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:26:04.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2cp4l" for this suite.
Jan  1 11:26:16.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:26:16.311: INFO: namespace: e2e-tests-emptydir-wrapper-2cp4l, resource: bindings, ignored listing per whitelist
Jan  1 11:26:16.332: INFO: namespace e2e-tests-emptydir-wrapper-2cp4l deletion completed in 12.284978486s

• [SLOW TEST:539.921 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:26:16.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  1 11:26:16.654: INFO: namespace e2e-tests-kubectl-9g8rk
Jan  1 11:26:16.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9g8rk'
Jan  1 11:26:19.365: INFO: stderr: ""
Jan  1 11:26:19.366: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  1 11:26:20.403: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:20.403: INFO: Found 0 / 1
Jan  1 11:26:22.060: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:22.060: INFO: Found 0 / 1
Jan  1 11:26:23.255: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:23.256: INFO: Found 0 / 1
Jan  1 11:26:24.633: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:24.634: INFO: Found 0 / 1
Jan  1 11:26:25.598: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:25.598: INFO: Found 0 / 1
Jan  1 11:26:26.391: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:26.391: INFO: Found 0 / 1
Jan  1 11:26:27.414: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:27.415: INFO: Found 0 / 1
Jan  1 11:26:28.388: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:28.388: INFO: Found 0 / 1
Jan  1 11:26:29.524: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:29.524: INFO: Found 0 / 1
Jan  1 11:26:30.470: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:30.470: INFO: Found 0 / 1
Jan  1 11:26:31.390: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:31.390: INFO: Found 0 / 1
Jan  1 11:26:32.375: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:32.375: INFO: Found 1 / 1
Jan  1 11:26:32.375: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan  1 11:26:32.379: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 11:26:32.379: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan  1 11:26:32.379: INFO: wait on redis-master startup in e2e-tests-kubectl-9g8rk
Jan  1 11:26:32.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8qmhn redis-master --namespace=e2e-tests-kubectl-9g8rk'
Jan  1 11:26:32.743: INFO: stderr: ""
Jan  1 11:26:32.744: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jan 11:26:31.030 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 11:26:31.030 # Server started, Redis version 3.2.12\n1:M 01 Jan 11:26:31.031 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 11:26:31.031 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  1 11:26:32.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-9g8rk'
Jan  1 11:26:33.008: INFO: stderr: ""
Jan  1 11:26:33.008: INFO: stdout: "service/rm2 exposed\n"
Jan  1 11:26:33.019: INFO: Service rm2 in namespace e2e-tests-kubectl-9g8rk found.
STEP: exposing service
Jan  1 11:26:35.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-9g8rk'
Jan  1 11:26:35.468: INFO: stderr: ""
Jan  1 11:26:35.468: INFO: stdout: "service/rm3 exposed\n"
Jan  1 11:26:35.514: INFO: Service rm3 in namespace e2e-tests-kubectl-9g8rk found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:26:37.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9g8rk" for this suite.
Jan  1 11:26:55.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:26:55.832: INFO: namespace: e2e-tests-kubectl-9g8rk, resource: bindings, ignored listing per whitelist
Jan  1 11:26:55.858: INFO: namespace e2e-tests-kubectl-9g8rk deletion completed in 18.274172077s

• [SLOW TEST:39.526 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:26:55.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  1 11:26:56.073: INFO: Waiting up to 5m0s for pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005" in namespace "e2e-tests-containers-6jdpp" to be "success or failure"
Jan  1 11:26:56.081: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.933498ms
Jan  1 11:26:58.096: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022677094s
Jan  1 11:27:00.113: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039466574s
Jan  1 11:27:02.135: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061677213s
Jan  1 11:27:04.156: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083082816s
Jan  1 11:27:06.173: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099503977s
STEP: Saw pod success
Jan  1 11:27:06.173: INFO: Pod "client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:27:06.179: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 11:27:06.784: INFO: Waiting for pod client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:27:07.019: INFO: Pod client-containers-9df6793c-2c89-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:27:07.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-6jdpp" for this suite.
Jan  1 11:27:13.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:27:13.355: INFO: namespace: e2e-tests-containers-6jdpp, resource: bindings, ignored listing per whitelist
Jan  1 11:27:13.452: INFO: namespace e2e-tests-containers-6jdpp deletion completed in 6.409600366s

• [SLOW TEST:17.593 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:27:13.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 11:27:13.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:14.225: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 11:27:14.226: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  1 11:27:14.242: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  1 11:27:14.365: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  1 11:27:14.472: INFO: scanned /root for discovery docs: 
Jan  1 11:27:14.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:40.014: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  1 11:27:40.015: INFO: stdout: "Created e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5\nScaling up e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan  1 11:27:40.015: INFO: stdout: "Created e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5\nScaling up e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  1 11:27:40.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:40.236: INFO: stderr: ""
Jan  1 11:27:40.237: INFO: stdout: "e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5-tf866 e2e-test-nginx-rc-5nc9x "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  1 11:27:45.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:45.460: INFO: stderr: ""
Jan  1 11:27:45.460: INFO: stdout: "e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5-tf866 "
Jan  1 11:27:45.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5-tf866 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:45.670: INFO: stderr: ""
Jan  1 11:27:45.670: INFO: stdout: "true"
Jan  1 11:27:45.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5-tf866 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:45.835: INFO: stderr: ""
Jan  1 11:27:45.836: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  1 11:27:45.836: INFO: e2e-test-nginx-rc-4143bdf8c0401b68214be24685add9e5-tf866 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  1 11:27:45.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fwxtv'
Jan  1 11:27:46.010: INFO: stderr: ""
Jan  1 11:27:46.011: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:27:46.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fwxtv" for this suite.
Jan  1 11:28:10.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:28:10.383: INFO: namespace: e2e-tests-kubectl-fwxtv, resource: bindings, ignored listing per whitelist
Jan  1 11:28:10.638: INFO: namespace e2e-tests-kubectl-fwxtv deletion completed in 24.614299221s

• [SLOW TEST:57.185 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:28:10.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 11:28:10.777: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 10.122254ms)
Jan  1 11:28:10.783: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.341805ms)
Jan  1 11:28:10.833: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 50.566473ms)
Jan  1 11:28:10.841: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.41495ms)
Jan  1 11:28:10.847: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.774479ms)
Jan  1 11:28:10.852: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.738232ms)
Jan  1 11:28:10.857: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.454738ms)
Jan  1 11:28:10.862: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.223301ms)
Jan  1 11:28:10.869: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.768133ms)
Jan  1 11:28:10.874: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.176143ms)
Jan  1 11:28:10.879: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.405757ms)
Jan  1 11:28:10.884: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.101771ms)
Jan  1 11:28:10.889: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.185641ms)
Jan  1 11:28:10.893: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.032437ms)
Jan  1 11:28:10.897: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.271186ms)
Jan  1 11:28:10.902: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.995206ms)
Jan  1 11:28:10.907: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.630486ms)
Jan  1 11:28:10.912: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.246963ms)
Jan  1 11:28:10.917: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.583387ms)
Jan  1 11:28:10.920: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.532468ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:28:10.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-7l6t8" for this suite.
Jan  1 11:28:16.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:28:17.288: INFO: namespace: e2e-tests-proxy-7l6t8, resource: bindings, ignored listing per whitelist
Jan  1 11:28:17.324: INFO: namespace e2e-tests-proxy-7l6t8 deletion completed in 6.399780255s

• [SLOW TEST:6.685 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
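Each numbered line in the proxy test above is one HTTP GET issued through the apiserver to the kubelet's `/logs/` endpoint, with the response code and per-request latency recorded in the form `(status; elapsed)`. A minimal sketch of that measurement loop, using a stand-in `fetch` (the endpoint, node name, and return value here are placeholders, since no cluster is assumed):

```python
import time

def fetch(path):
    # Stand-in for an HTTP GET through the apiserver proxy; the real test
    # requests /api/v1/nodes/<node>:10250/proxy/logs/ and reads the body.
    return 200, "alternatives.log\n"

def probe(path, attempts=20):
    """Issue `attempts` requests, recording (iteration, status, latency_ms)."""
    results = []
    for i in range(attempts):
        start = time.perf_counter()
        status, _body = fetch(path)
        elapsed_ms = (time.perf_counter() - start) * 1000
        results.append((i, status, elapsed_ms))
        print(f"({i}) {path}: ({status}; {elapsed_ms:.6f}ms)")
    return results

results = probe("/api/v1/nodes/example-node:10250/proxy/logs/")
```

The test passes only if all 20 attempts return a 2xx status; the per-attempt latencies in the log are informational.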
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:28:17.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vr85p
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vr85p
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vr85p
Jan  1 11:28:17.515: INFO: Found 0 stateful pods, waiting for 1
Jan  1 11:28:27.536: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan  1 11:28:37.536: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  1 11:28:37.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 11:28:38.228: INFO: stderr: ""
Jan  1 11:28:38.228: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 11:28:38.228: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

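The exec above is how the test makes a pod unhealthy: nginx's readiness probe serves `index.html`, so moving the file away makes the probe fail while the container keeps running. The trailing `|| true` keeps the exec's exit status 0 even if the file was already moved, so retries are idempotent. A local sketch of that idempotency (the temporary directory stands in for the pod's `/usr/share/nginx/html`; this is not the pod's real filesystem):

```python
import os
import subprocess
import tempfile

# Throwaway directory standing in for /usr/share/nginx/html inside the pod.
root = tempfile.mkdtemp()
html = os.path.join(root, "html")
os.makedirs(html)
with open(os.path.join(html, "index.html"), "w") as f:
    f.write("ok\n")

cmd = f"mv -v {html}/index.html {root}/ || true"

# First run: the file exists, mv relocates it (readiness would now fail).
first = subprocess.run(["/bin/sh", "-c", cmd], capture_output=True, text=True)

# Second run: the source is gone, mv itself fails, but "|| true" still
# yields exit status 0, so a retried exec is treated as successful.
second = subprocess.run(["/bin/sh", "-c", cmd], capture_output=True, text=True)

print(first.returncode, second.returncode)  # → 0 0
```

The same trick in reverse (`mv /tmp/index.html` back into the html directory) restores readiness later in the test.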
Jan  1 11:28:38.262: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  1 11:28:48.294: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 11:28:48.294: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 11:28:48.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995786s
Jan  1 11:28:49.386: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988652221s
Jan  1 11:28:50.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.940502998s
Jan  1 11:28:51.418: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.922043564s
Jan  1 11:28:52.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.909199218s
Jan  1 11:28:53.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.860253903s
Jan  1 11:28:54.533: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.843436491s
Jan  1 11:28:55.679: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.793603514s
Jan  1 11:28:56.708: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.647153334s
Jan  1 11:28:57.721: INFO: Verifying statefulset ss doesn't scale past 1 for another 619.055243ms
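The countdown above is a negative check: for a full 10s window the test repeatedly reads the StatefulSet and asserts the replica count never exceeds the current bound while a pod is unhealthy. The polling logic can be sketched roughly as follows (`get_replicas` is a stand-in for querying the apiserver, not the framework's actual helper):

```python
import time

def confirm_no_scale_past(get_replicas, limit, window=10.0, interval=1.0):
    """Poll for `window` seconds, failing fast if the replica count ever
    exceeds `limit`; returning normally means scaling stayed halted."""
    deadline = time.monotonic() + window
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return
        count = get_replicas()
        if count > limit:
            raise AssertionError(f"statefulset scaled past {limit}: saw {count}")
        print(f"Verifying statefulset doesn't scale past {limit} "
              f"for another {remaining:.6f}s")
        time.sleep(min(interval, remaining))

# With the replica count pinned at the bound, the check runs the window out.
confirm_no_scale_past(lambda: 1, limit=1, window=0.3, interval=0.1)
```

Only after the window elapses without a violation does the test proceed to restore readiness and let the scale-up continue.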
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vr85p
Jan  1 11:28:58.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 11:28:59.306: INFO: stderr: ""
Jan  1 11:28:59.306: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 11:28:59.306: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 11:28:59.340: INFO: Found 1 stateful pods, waiting for 3
Jan  1 11:29:09.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 11:29:09.357: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 11:29:09.357: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 11:29:19.362: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 11:29:19.362: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 11:29:19.362: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  1 11:29:19.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 11:29:20.034: INFO: stderr: ""
Jan  1 11:29:20.035: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 11:29:20.035: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 11:29:20.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 11:29:20.779: INFO: stderr: ""
Jan  1 11:29:20.780: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 11:29:20.780: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 11:29:20.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 11:29:21.476: INFO: stderr: ""
Jan  1 11:29:21.476: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 11:29:21.476: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 11:29:21.476: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 11:29:21.494: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  1 11:29:31.581: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 11:29:31.581: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 11:29:31.581: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 11:29:31.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999563s
Jan  1 11:29:32.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983565915s
Jan  1 11:29:33.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.947073659s
Jan  1 11:29:34.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.924399295s
Jan  1 11:29:35.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.901450464s
Jan  1 11:29:36.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.886645333s
Jan  1 11:29:37.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.863127249s
Jan  1 11:29:38.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.824626978s
Jan  1 11:29:39.822: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.797085577s
Jan  1 11:29:40.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 771.222135ms
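The `Verifying statefulset ss doesn't scale past 3` countdown above polls roughly once per second until a deadline, logging the time remaining on each pass. A minimal Python sketch of that pattern (the helper name and signature are illustrative, not the e2e framework's actual API) might look like:

```python
import time

def verify_no_scale_past(get_replicas, limit, duration_s, interval_s=1.0):
    """Poll get_replicas() until the deadline, failing fast if it exceeds limit.

    Mirrors the log's countdown: each iteration reports how long is left
    before the verification window closes.
    """
    deadline = time.monotonic() + duration_s
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return True  # the replica limit held for the whole window
        replicas = get_replicas()
        if replicas > limit:
            raise AssertionError(f"scaled past {limit}: saw {replicas} replicas")
        print(f"doesn't scale past {limit} for another {remaining:.9f}s")
        time.sleep(min(interval_s, remaining))
```

The countdown in the log (9.99s, 8.98s, ...) is exactly this `remaining` value printed once per polling interval.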
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-vr85p
Jan  1 11:29:41.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 11:29:42.689: INFO: stderr: ""
Jan  1 11:29:42.689: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 11:29:42.689: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 11:29:42.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 11:29:43.402: INFO: stderr: ""
Jan  1 11:29:43.402: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 11:29:43.402: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 11:29:43.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 11:29:43.944: INFO: rc: 126
Jan  1 11:29:43.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc001e42cf0 exit status 126   true [0xc0016abfc0 0xc000d64000 0xc000d64018] [0xc0016abfc0 0xc000d64000 0xc000d64018] [0xc0016abff8 0xc000d64010] [0x935700 0x935700] 0xc001da1440 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126

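The `Waiting 10s to retry failed RunHostCmd` messages above come from a retry-until-timeout loop wrapped around `kubectl exec`. A minimal Python sketch of that behavior (assuming nothing about the Go framework's real signatures; the function name here only echoes the log) is:

```python
import subprocess
import time

def run_host_cmd_with_retries(argv, timeout_s=300, retry_wait_s=10):
    """Run a command, retrying on non-zero exit until timeout_s elapses.

    Returns captured stdout on success; raises once the deadline passes,
    mirroring the retry loop visible in the log.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        result = subprocess.run(argv, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        if time.monotonic() >= deadline:
            raise RuntimeError(
                f"command failed with rc: {result.returncode}: {result.stderr}")
        print(f"Waiting {retry_wait_s}s to retry failed RunHostCmd "
              f"(rc: {result.returncode})")
        time.sleep(retry_wait_s)
```

In the log the retried command is the `kubectl exec ... mv -v /tmp/index.html ...` invocation; it keeps failing with `NotFound` because pod ss-2 has already been deleted by the scale-down, so the loop runs until its timeout.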
Jan  1 11:29:53.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 11:29:54.291: INFO: rc: 1
Jan  1 11:29:54.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000b50120 exit status 1   true [0xc0016aa010 0xc0016aa130 0xc0016aa240] [0xc0016aa010 0xc0016aa130 0xc0016aa240] [0xc0016aa0e8 0xc0016aa170] [0x935700 0x935700] 0xc001ec21e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

[... 28 further identical RunHostCmd retries between 11:30:04 and 11:34:38, each failing with rc: 1, Error from server (NotFound): pods "ss-2" not found, omitted ...]
Jan  1 11:34:49.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85p ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 11:34:49.368: INFO: rc: 1
Jan  1 11:34:49.369: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  1 11:34:49.369: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  1 11:34:49.518: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vr85p
Jan  1 11:34:49.529: INFO: Scaling statefulset ss to 0
Jan  1 11:34:49.544: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 11:34:49.550: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:34:49.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vr85p" for this suite.
Jan  1 11:34:57.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:34:57.872: INFO: namespace: e2e-tests-statefulset-vr85p, resource: bindings, ignored listing per whitelist
Jan  1 11:34:57.914: INFO: namespace e2e-tests-statefulset-vr85p deletion completed in 8.278382085s

• [SLOW TEST:400.590 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
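The `Verifying that stateful set ss was scaled down in reverse order` step in the test above reduces to checking that higher ordinals (ss-2) terminate before lower ones (ss-0). A hedged Python sketch of that ordering check (the helper is illustrative, not the framework's code) is:

```python
def scaled_down_in_reverse_order(deletion_order):
    """True if pods were deleted highest-ordinal-first, e.g. ss-2, ss-1, ss-0.

    StatefulSet scale-down is expected to proceed in reverse ordinal order.
    """
    ordinals = [int(name.rsplit("-", 1)[1]) for name in deletion_order]
    return ordinals == sorted(ordinals, reverse=True)
```

This is consistent with the log, where ss-2 is already `NotFound` while ss-0 and ss-1 still accept `kubectl exec`.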
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:34:57.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 11:34:58.139: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  1 11:34:58.160: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  1 11:35:03.187: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  1 11:35:07.223: INFO: Creating deployment "test-rolling-update-deployment"
Jan  1 11:35:07.243: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  1 11:35:07.270: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  1 11:35:09.294: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  1 11:35:09.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 11:35:11.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 11:35:13.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 11:35:15.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 11:35:17.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475317, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713475307, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 11:35:19.316: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 11:35:19.355: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-xwr8s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xwr8s/deployments/test-rolling-update-deployment,UID:c2b9698f-2c8a-11ea-a994-fa163e34d433,ResourceVersion:16788174,Generation:1,CreationTimestamp:2020-01-01 11:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-01 11:35:07 +0000 UTC 2020-01-01 11:35:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-01 11:35:17 +0000 UTC 2020-01-01 11:35:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  1 11:35:19.366: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-xwr8s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xwr8s/replicasets/test-rolling-update-deployment-75db98fb4c,UID:c2cfa7d7-2c8a-11ea-a994-fa163e34d433,ResourceVersion:16788165,Generation:1,CreationTimestamp:2020-01-01 11:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c2b9698f-2c8a-11ea-a994-fa163e34d433 0xc001d62047 0xc001d62048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  1 11:35:19.366: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  1 11:35:19.367: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-xwr8s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xwr8s/replicasets/test-rolling-update-controller,UID:bd4ebb5b-2c8a-11ea-a994-fa163e34d433,ResourceVersion:16788173,Generation:2,CreationTimestamp:2020-01-01 11:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c2b9698f-2c8a-11ea-a994-fa163e34d433 0xc000fd9d47 0xc000fd9d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 11:35:19.393: INFO: Pod "test-rolling-update-deployment-75db98fb4c-z4zh7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-z4zh7,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-xwr8s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xwr8s/pods/test-rolling-update-deployment-75db98fb4c-z4zh7,UID:c2d10150-2c8a-11ea-a994-fa163e34d433,ResourceVersion:16788164,Generation:0,CreationTimestamp:2020-01-01 11:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c c2cfa7d7-2c8a-11ea-a994-fa163e34d433 0xc001cf38e7 0xc001cf38e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-42k9g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-42k9g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-42k9g true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cf3950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cf3970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 11:35:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 11:35:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 11:35:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 11:35:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-01 11:35:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-01 11:35:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d3e2ea8ebdcc3f8439ea16edbe2ef40ba73b9d19651d06eba14f38611f37b7dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
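Editor's note: the `DefaultMode:*420` in the service-account token volume of the pod dump above is the decimal rendering of octal 0644 (rw-r--r--), the default file mode for secret and configMap volume files; the "defaultMode set" test at the head of this log exercises the same field. A quick check of the decimal/octal correspondence:

```python
# The API machinery prints volume file modes in decimal; 420 is 0644 octal,
# i.e. owner read/write, group and other read-only.
default_mode = 420
assert default_mode == 0o644
mode_octal = oct(default_mode)  # "0o644"
```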
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:35:19.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xwr8s" for this suite.
Jan  1 11:35:27.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:35:27.646: INFO: namespace: e2e-tests-deployment-xwr8s, resource: bindings, ignored listing per whitelist
Jan  1 11:35:27.681: INFO: namespace e2e-tests-deployment-xwr8s deletion completed in 8.270871776s

• [SLOW TEST:29.766 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
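Editor's note: the status lines in the rolling-update test above briefly show `Replicas:2, UnavailableReplicas:1` for a 1-replica Deployment. That is consistent with the default strategy in the dump (maxUnavailable 25%, maxSurge 25%): Kubernetes resolves percentage values by rounding maxSurge up and maxUnavailable down, so a single-replica Deployment may surge to 2 pods while 0 may be unavailable. A minimal sketch of that rounding (the helper name is illustrative, not the controller's actual code):

```python
import math

def resolve_rolling_update(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Mimic how percentage-valued rolling-update fields are resolved:
    maxSurge rounds up, maxUnavailable rounds down (illustrative sketch)."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# replicas=1 with the default 25%/25%: may surge to 2 pods, 0 may be unavailable,
# matching the Replicas:2 / UnavailableReplicas:1-then-0 progression in the log.
surge, unavailable = resolve_rolling_update(1, 25, 25)
```

Rounding in opposite directions guarantees the resolved pair is never (0, 0), which would stall the rollout.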
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:35:27.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-cf979c35-2c8a-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 11:35:28.852: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-jgp7q" to be "success or failure"
Jan  1 11:35:28.908: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.285593ms
Jan  1 11:35:30.928: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076086122s
Jan  1 11:35:32.943: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091300642s
Jan  1 11:35:35.355: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503601868s
Jan  1 11:35:37.499: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.647537484s
Jan  1 11:35:39.517: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665182185s
STEP: Saw pod success
Jan  1 11:35:39.517: INFO: Pod "pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:35:39.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 11:35:39.797: INFO: Waiting for pod pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:35:39.856: INFO: Pod pod-configmaps-cf99392a-2c8a-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:35:39.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jgp7q" for this suite.
Jan  1 11:35:45.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:35:46.244: INFO: namespace: e2e-tests-configmap-jgp7q, resource: bindings, ignored listing per whitelist
Jan  1 11:35:46.258: INFO: namespace e2e-tests-configmap-jgp7q deletion completed in 6.340704861s

• [SLOW TEST:18.577 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
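Editor's note: the repeated `Waiting up to 5m0s ... Elapsed:` lines in the test above come from the framework polling the pod phase roughly every two seconds until it reaches the target phase or the timeout expires. A hedged sketch of such a wait loop (names are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0):
    """Poll check() until it returns True or timeout_s elapses.

    Returns the elapsed seconds on success; raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(interval_s)

# Example: a pod that reports "Succeeded" on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
elapsed = wait_for_condition(lambda: next(phases) == "Succeeded", interval_s=0.01)
```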
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:35:46.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0101 11:36:27.831023       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 11:36:27.831: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:36:27.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-w98xx" for this suite.
Jan  1 11:36:38.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:36:38.998: INFO: namespace: e2e-tests-gc-w98xx, resource: bindings, ignored listing per whitelist
Jan  1 11:36:39.207: INFO: namespace e2e-tests-gc-w98xx deletion completed in 11.369556057s

• [SLOW TEST:52.949 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
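Editor's note: the garbage-collector test above deletes the replication controller with delete options that request orphaning, then watches for 30 seconds to confirm the pods survive. In the API this is expressed via `propagationPolicy: Orphan` in the DeleteOptions body: the owner is removed, its dependents' ownerReferences are stripped, and the pods keep running. A sketch of such a request payload (not the test's actual client code):

```python
import json

# DeleteOptions asking the API server to orphan dependents instead of
# cascading the delete ("Background" and "Foreground" are the alternatives).
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",
}

body = json.dumps(delete_options)
```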
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:36:39.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-945pv/configmap-test-fa19f9b1-2c8a-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 11:36:40.404: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-945pv" to be "success or failure"
Jan  1 11:36:40.637: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 233.215431ms
Jan  1 11:36:42.721: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316861765s
Jan  1 11:36:45.573: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.168955627s
Jan  1 11:36:47.589: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.184937973s
Jan  1 11:36:49.840: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.435901707s
Jan  1 11:36:51.873: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.468943931s
Jan  1 11:36:54.642: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.237977329s
Jan  1 11:36:56.668: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.264105894s
Jan  1 11:36:58.726: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.32196107s
STEP: Saw pod success
Jan  1 11:36:58.727: INFO: Pod "pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:36:58.762: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005 container env-test: 
STEP: delete the pod
Jan  1 11:36:58.990: INFO: Waiting for pod pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:36:59.010: INFO: Pod pod-configmaps-fa22b767-2c8a-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:36:59.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-945pv" for this suite.
Jan  1 11:37:05.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:37:05.216: INFO: namespace: e2e-tests-configmap-945pv, resource: bindings, ignored listing per whitelist
Jan  1 11:37:05.229: INFO: namespace e2e-tests-configmap-945pv deletion completed in 6.209006157s

• [SLOW TEST:26.021 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
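Editor's note: the test above consumes a ConfigMap through the container environment rather than a volume. In the pod spec that takes the form of an `env` entry whose value comes from a `configMapKeyRef`. A sketch of such an entry (the variable, ConfigMap, and key names here are illustrative, not taken from the test):

```python
# One container env entry: CONFIG_DATA is filled from key "data-1" of the
# ConfigMap "configmap-test" at pod startup (hypothetical names).
env_entry = {
    "name": "CONFIG_DATA",
    "valueFrom": {
        "configMapKeyRef": {
            "name": "configmap-test",  # ConfigMap to read from
            "key": "data-1",           # key within its .data
        }
    },
}
```

Unlike volume-mounted ConfigMaps, environment variables are resolved once at container start and do not update when the ConfigMap changes.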
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:37:05.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 11:37:05.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-zsvpk'
Jan  1 11:37:07.620: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 11:37:07.620: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  1 11:37:09.748: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-pzrg2]
Jan  1 11:37:09.748: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-pzrg2" in namespace "e2e-tests-kubectl-zsvpk" to be "running and ready"
Jan  1 11:37:09.763: INFO: Pod "e2e-test-nginx-rc-pzrg2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.756537ms
Jan  1 11:37:11.787: INFO: Pod "e2e-test-nginx-rc-pzrg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039451075s
Jan  1 11:37:14.111: INFO: Pod "e2e-test-nginx-rc-pzrg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363068129s
Jan  1 11:37:16.130: INFO: Pod "e2e-test-nginx-rc-pzrg2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382428581s
Jan  1 11:37:18.158: INFO: Pod "e2e-test-nginx-rc-pzrg2": Phase="Running", Reason="", readiness=true. Elapsed: 8.410260945s
Jan  1 11:37:18.159: INFO: Pod "e2e-test-nginx-rc-pzrg2" satisfied condition "running and ready"
Jan  1 11:37:18.159: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-pzrg2]
Jan  1 11:37:18.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zsvpk'
Jan  1 11:37:18.423: INFO: stderr: ""
Jan  1 11:37:18.424: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  1 11:37:18.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-zsvpk'
Jan  1 11:37:18.603: INFO: stderr: ""
Jan  1 11:37:18.603: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:37:18.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zsvpk" for this suite.
Jan  1 11:37:42.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:37:42.785: INFO: namespace: e2e-tests-kubectl-zsvpk, resource: bindings, ignored listing per whitelist
Jan  1 11:37:42.871: INFO: namespace e2e-tests-kubectl-zsvpk deletion completed in 24.189721909s

• [SLOW TEST:37.642 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:37:42.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  1 11:37:51.867: INFO: Successfully updated pod "labelsupdate1fa4edcf-2c8b-11ea-8bf6-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:37:54.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nw8j5" for this suite.
Jan  1 11:38:18.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:38:18.177: INFO: namespace: e2e-tests-downward-api-nw8j5, resource: bindings, ignored listing per whitelist
Jan  1 11:38:18.261: INFO: namespace e2e-tests-downward-api-nw8j5 deletion completed in 24.205303016s

• [SLOW TEST:35.389 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:38:18.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  1 11:38:44.751: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:44.751: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:45.226: INFO: Exec stderr: ""
Jan  1 11:38:45.227: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:45.227: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:45.574: INFO: Exec stderr: ""
Jan  1 11:38:45.574: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:45.574: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:45.992: INFO: Exec stderr: ""
Jan  1 11:38:45.993: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:45.993: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:46.260: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  1 11:38:46.260: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:46.260: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:46.623: INFO: Exec stderr: ""
Jan  1 11:38:46.623: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:46.623: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:46.914: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  1 11:38:46.914: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:46.914: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:47.188: INFO: Exec stderr: ""
Jan  1 11:38:47.189: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:47.189: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:47.546: INFO: Exec stderr: ""
Jan  1 11:38:47.546: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:47.546: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:47.906: INFO: Exec stderr: ""
Jan  1 11:38:47.906: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-flggc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:38:47.907: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:38:48.207: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:38:48.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-flggc" for this suite.
Jan  1 11:39:44.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:39:44.381: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-flggc, resource: bindings, ignored listing per whitelist
Jan  1 11:39:44.485: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-flggc deletion completed in 56.266096768s

• [SLOW TEST:86.224 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:39:44.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 11:39:44.706: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:40:02.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-z7nrl" for this suite.
Jan  1 11:40:08.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:40:08.635: INFO: namespace: e2e-tests-init-container-z7nrl, resource: bindings, ignored listing per whitelist
Jan  1 11:40:08.654: INFO: namespace e2e-tests-init-container-z7nrl deletion completed in 6.3019259s

• [SLOW TEST:24.168 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:40:08.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  1 11:40:08.927: INFO: Waiting up to 5m0s for pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-7dhjq" to be "success or failure"
Jan  1 11:40:08.955: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.735585ms
Jan  1 11:40:10.969: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041699109s
Jan  1 11:40:12.988: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061014713s
Jan  1 11:40:15.299: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372193964s
Jan  1 11:40:17.390: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.462930374s
Jan  1 11:40:19.403: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.475867589s
STEP: Saw pod success
Jan  1 11:40:19.403: INFO: Pod "pod-7677dee7-2c8b-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:40:19.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7677dee7-2c8b-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 11:40:20.088: INFO: Waiting for pod pod-7677dee7-2c8b-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:40:20.117: INFO: Pod pod-7677dee7-2c8b-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:40:20.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7dhjq" for this suite.
Jan  1 11:40:26.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:40:26.247: INFO: namespace: e2e-tests-emptydir-7dhjq, resource: bindings, ignored listing per whitelist
Jan  1 11:40:26.275: INFO: namespace e2e-tests-emptydir-7dhjq deletion completed in 6.151849832s

• [SLOW TEST:17.620 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:40:26.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  1 11:40:27.241: INFO: Waiting up to 5m0s for pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6" in namespace "e2e-tests-svcaccounts-kx5jv" to be "success or failure"
Jan  1 11:40:27.287: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 45.967261ms
Jan  1 11:40:29.526: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284630127s
Jan  1 11:40:32.253: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.011552544s
Jan  1 11:40:34.270: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.029250653s
Jan  1 11:40:36.290: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.048512319s
Jan  1 11:40:38.305: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.064334874s
Jan  1 11:40:40.697: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.456198665s
Jan  1 11:40:42.750: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.508795768s
STEP: Saw pod success
Jan  1 11:40:42.750: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6" satisfied condition "success or failure"
Jan  1 11:40:42.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6 container token-test: 
STEP: delete the pod
Jan  1 11:40:43.032: INFO: Waiting for pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6 to disappear
Jan  1 11:40:43.069: INFO: Pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-c2hk6 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  1 11:40:43.078: INFO: Waiting up to 5m0s for pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq" in namespace "e2e-tests-svcaccounts-kx5jv" to be "success or failure"
Jan  1 11:40:43.109: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.850669ms
Jan  1 11:40:45.133: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055018379s
Jan  1 11:40:47.149: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070861919s
Jan  1 11:40:49.224: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145265751s
Jan  1 11:40:51.262: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183068143s
Jan  1 11:40:53.346: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267857819s
Jan  1 11:40:55.649: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.570594919s
Jan  1 11:40:57.660: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.581209192s
Jan  1 11:40:59.715: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.636800201s
STEP: Saw pod success
Jan  1 11:40:59.716: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq" satisfied condition "success or failure"
Jan  1 11:40:59.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq container root-ca-test: 
STEP: delete the pod
Jan  1 11:41:00.113: INFO: Waiting for pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq to disappear
Jan  1 11:41:00.124: INFO: Pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-pqqbq no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  1 11:41:00.160: INFO: Waiting up to 5m0s for pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw" in namespace "e2e-tests-svcaccounts-kx5jv" to be "success or failure"
Jan  1 11:41:00.269: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 108.724115ms
Jan  1 11:41:02.283: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122336772s
Jan  1 11:41:04.312: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151994868s
Jan  1 11:41:06.329: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168611534s
Jan  1 11:41:08.724: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.564148357s
Jan  1 11:41:10.738: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.578238974s
Jan  1 11:41:12.757: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.596274327s
Jan  1 11:41:14.810: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.649973549s
STEP: Saw pod success
Jan  1 11:41:14.811: INFO: Pod "pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw" satisfied condition "success or failure"
Jan  1 11:41:14.847: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw container namespace-test: 
STEP: delete the pod
Jan  1 11:41:15.016: INFO: Waiting for pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw to disappear
Jan  1 11:41:15.077: INFO: Pod pod-service-account-81735805-2c8b-11ea-8bf6-0242ac110005-hq9lw no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:41:15.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-kx5jv" for this suite.
Jan  1 11:41:23.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:41:23.233: INFO: namespace: e2e-tests-svcaccounts-kx5jv, resource: bindings, ignored listing per whitelist
Jan  1 11:41:23.572: INFO: namespace e2e-tests-svcaccounts-kx5jv deletion completed in 8.480880964s

• [SLOW TEST:57.297 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:41:23.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-a3261772-2c8b-11ea-8bf6-0242ac110005
STEP: Creating secret with name s-test-opt-upd-a32618c1-2c8b-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a3261772-2c8b-11ea-8bf6-0242ac110005
STEP: Updating secret s-test-opt-upd-a32618c1-2c8b-11ea-8bf6-0242ac110005
STEP: Creating secret with name s-test-opt-create-a326191c-2c8b-11ea-8bf6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:42:57.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bwtb8" for this suite.
Jan  1 11:43:23.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:43:23.348: INFO: namespace: e2e-tests-secrets-bwtb8, resource: bindings, ignored listing per whitelist
Jan  1 11:43:23.398: INFO: namespace e2e-tests-secrets-bwtb8 deletion completed in 26.235792343s

• [SLOW TEST:119.826 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:43:23.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan  1 11:43:23.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  1 11:43:23.869: INFO: stderr: ""
Jan  1 11:43:23.870: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:43:23.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pmljm" for this suite.
Jan  1 11:43:29.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:43:30.047: INFO: namespace: e2e-tests-kubectl-pmljm, resource: bindings, ignored listing per whitelist
Jan  1 11:43:30.080: INFO: namespace e2e-tests-kubectl-pmljm deletion completed in 6.193209694s

• [SLOW TEST:6.681 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:43:30.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:43:36.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-2n58s" for this suite.
Jan  1 11:43:42.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:43:42.878: INFO: namespace: e2e-tests-namespaces-2n58s, resource: bindings, ignored listing per whitelist
Jan  1 11:43:43.015: INFO: namespace e2e-tests-namespaces-2n58s deletion completed in 6.217946174s
STEP: Destroying namespace "e2e-tests-nsdeletetest-8xld4" for this suite.
Jan  1 11:43:43.019: INFO: Namespace e2e-tests-nsdeletetest-8xld4 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-qjffz" for this suite.
Jan  1 11:43:49.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:43:49.226: INFO: namespace: e2e-tests-nsdeletetest-qjffz, resource: bindings, ignored listing per whitelist
Jan  1 11:43:49.267: INFO: namespace e2e-tests-nsdeletetest-qjffz deletion completed in 6.247491118s

• [SLOW TEST:19.187 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:43:49.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-l46kl.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-l46kl.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-l46kl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-l46kl.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-l46kl.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-l46kl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 11:44:05.770: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.775: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.782: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.785: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.791: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.794: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.798: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.808: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.823: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-l46kl.svc.cluster.local from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.829: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.833: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.837: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005)
Jan  1 11:44:05.837: INFO: Lookups using e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-l46kl.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
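The `wheezy_udp@PodARecord` / `jessie_tcp@PodARecord` names in the failure list above come from the pod's own IP rewritten into a DNS label, exactly as the `awk` fragment in the probe command does it. A small sketch of that derivation (the IP below is a hypothetical example; the namespace is the one from this test run):

```shell
#!/bin/sh
# Derive a pod A record name from a pod IP, mirroring the awk fragment
# in the probe script: dots in the IP become dashes, then the namespace
# and ".pod.cluster.local" are appended.
pod_a_record() {
  # $1 = pod IP, $2 = namespace
  echo "$1" | awk -F. -v ns="$2" \
    '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}
```

For a hypothetical pod IP 10.32.0.4 in namespace e2e-tests-dns-l46kl this yields `10-32-0-4.e2e-tests-dns-l46kl.pod.cluster.local`, which is the name the UDP/TCP PodARecord checks then resolve.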

Jan  1 11:44:11.016: INFO: DNS probes using e2e-tests-dns-l46kl/dns-test-f9fa9b9f-2c8b-11ea-8bf6-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:44:11.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-l46kl" for this suite.
Jan  1 11:44:19.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:44:20.126: INFO: namespace: e2e-tests-dns-l46kl, resource: bindings, ignored listing per whitelist
Jan  1 11:44:20.134: INFO: namespace e2e-tests-dns-l46kl deletion completed in 8.969027754s

• [SLOW TEST:30.866 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:44:20.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-0c61feac-2c8c-11ea-8bf6-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-0c61ffa3-2c8c-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0c61feac-2c8c-11ea-8bf6-0242ac110005
STEP: Updating configmap cm-test-opt-upd-0c61ffa3-2c8c-11ea-8bf6-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-0c61ffcd-2c8c-11ea-8bf6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:44:38.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bjnb8" for this suite.
Jan  1 11:45:20.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:45:21.010: INFO: namespace: e2e-tests-projected-bjnb8, resource: bindings, ignored listing per whitelist
Jan  1 11:45:21.196: INFO: namespace e2e-tests-projected-bjnb8 deletion completed in 42.425387859s

• [SLOW TEST:61.062 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:45:21.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  1 11:45:21.441: INFO: Waiting up to 5m0s for pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-x7cbp" to be "success or failure"
Jan  1 11:45:21.468: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.443399ms
Jan  1 11:45:23.829: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388003219s
Jan  1 11:45:25.882: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440231693s
Jan  1 11:45:27.896: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455004879s
Jan  1 11:45:29.920: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.478819595s
Jan  1 11:45:31.938: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496567371s
STEP: Saw pod success
Jan  1 11:45:31.938: INFO: Pod "pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:45:31.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 11:45:32.287: INFO: Waiting for pod pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:45:32.669: INFO: Pod pod-30d0b0aa-2c8c-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:45:32.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x7cbp" for this suite.
Jan  1 11:45:38.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:45:39.072: INFO: namespace: e2e-tests-emptydir-x7cbp, resource: bindings, ignored listing per whitelist
Jan  1 11:45:39.183: INFO: namespace e2e-tests-emptydir-x7cbp deletion completed in 6.501442355s

• [SLOW TEST:17.987 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
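The repeated `Phase="Pending" ... Elapsed: ...` lines in the EmptyDir test above show the framework polling the pod until it reaches a terminal phase ("success or failure"). A minimal sketch of that wait, assuming a hypothetical `get_phase` helper that prints the pod's current phase (e.g. via `kubectl get pod -o jsonpath='{.status.phase}'`); the 1-second interval is illustrative, not the framework's actual poll period:

```shell
#!/bin/sh
# Poll until the pod leaves Pending/Running, or the timeout elapses.
# Prints the terminal phase (Succeeded or Failed) on success.
wait_for_phase() {
  timeout="$1"; shift      # timeout in seconds; remaining args go to get_phase
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase="$(get_phase "$@")"
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1                 # timed out without reaching a terminal phase
}
```

This mirrors the log's shape: several Pending observations with growing elapsed time, then a single Succeeded line followed by "Saw pod success".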
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:45:39.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 11:45:39.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:45:49.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-28n77" for this suite.
Jan  1 11:46:31.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:46:31.802: INFO: namespace: e2e-tests-pods-28n77, resource: bindings, ignored listing per whitelist
Jan  1 11:46:31.828: INFO: namespace e2e-tests-pods-28n77 deletion completed in 42.240421421s

• [SLOW TEST:52.644 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:46:31.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 11:46:32.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-6kq27" to be "success or failure"
Jan  1 11:46:32.227: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.676665ms
Jan  1 11:46:34.244: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024454985s
Jan  1 11:46:36.258: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038859477s
Jan  1 11:46:38.894: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.674814149s
Jan  1 11:46:40.917: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697343774s
Jan  1 11:46:42.976: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.756812059s
STEP: Saw pod success
Jan  1 11:46:42.977: INFO: Pod "downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:46:42.991: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 11:46:43.104: INFO: Waiting for pod downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:46:43.204: INFO: Pod downwardapi-volume-5af578d1-2c8c-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:46:43.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6kq27" for this suite.
Jan  1 11:46:49.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:46:49.407: INFO: namespace: e2e-tests-projected-6kq27, resource: bindings, ignored listing per whitelist
Jan  1 11:46:49.476: INFO: namespace e2e-tests-projected-6kq27 deletion completed in 6.262688946s

• [SLOW TEST:17.648 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:46:49.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0101 11:47:00.510001       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 11:47:00.510: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:47:00.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bz4rq" for this suite.
Jan  1 11:47:06.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:47:06.851: INFO: namespace: e2e-tests-gc-bz4rq, resource: bindings, ignored listing per whitelist
Jan  1 11:47:06.889: INFO: namespace e2e-tests-gc-bz4rq deletion completed in 6.355735107s

• [SLOW TEST:17.413 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:47:06.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  1 11:47:07.289: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:47:07.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7blvc" for this suite.
Jan  1 11:47:13.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:47:13.560: INFO: namespace: e2e-tests-kubectl-7blvc, resource: bindings, ignored listing per whitelist
Jan  1 11:47:13.697: INFO: namespace e2e-tests-kubectl-7blvc deletion completed in 6.24141936s

• [SLOW TEST:6.806 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:47:13.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  1 11:47:13.943: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-6mtrp" to be "success or failure"
Jan  1 11:47:14.069: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 125.332831ms
Jan  1 11:47:16.272: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328721075s
Jan  1 11:47:18.289: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345831355s
Jan  1 11:47:20.420: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476559389s
Jan  1 11:47:22.443: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499588414s
Jan  1 11:47:24.464: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.52005313s
Jan  1 11:47:26.488: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.544733542s
Jan  1 11:47:28.521: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.577050556s
STEP: Saw pod success
Jan  1 11:47:28.521: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  1 11:47:28.565: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  1 11:47:29.009: INFO: Waiting for pod pod-host-path-test to disappear
Jan  1 11:47:29.022: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:47:29.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-6mtrp" for this suite.
Jan  1 11:47:35.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:47:35.418: INFO: namespace: e2e-tests-hostpath-6mtrp, resource: bindings, ignored listing per whitelist
Jan  1 11:47:35.425: INFO: namespace e2e-tests-hostpath-6mtrp deletion completed in 6.353884031s

• [SLOW TEST:21.727 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:47:35.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 11:47:35.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-jqx7p" to be "success or failure"
Jan  1 11:47:35.868: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 114.977456ms
Jan  1 11:47:37.886: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133209696s
Jan  1 11:47:39.905: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152523902s
Jan  1 11:47:42.117: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.364582044s
Jan  1 11:47:44.143: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.390179211s
Jan  1 11:47:46.392: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639618943s
STEP: Saw pod success
Jan  1 11:47:46.393: INFO: Pod "downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:47:46.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 11:47:47.016: INFO: Waiting for pod downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:47:47.042: INFO: Pod downwardapi-volume-80db59dc-2c8c-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:47:47.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jqx7p" for this suite.
Jan  1 11:47:53.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:47:53.249: INFO: namespace: e2e-tests-projected-jqx7p, resource: bindings, ignored listing per whitelist
Jan  1 11:47:53.251: INFO: namespace e2e-tests-projected-jqx7p deletion completed in 6.204091314s

• [SLOW TEST:17.826 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:47:53.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 11:47:53.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-nkk9l" to be "success or failure"
Jan  1 11:47:53.537: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.478462ms
Jan  1 11:47:55.553: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035139851s
Jan  1 11:47:57.572: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05371899s
Jan  1 11:47:59.584: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065647054s
Jan  1 11:48:01.600: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081954782s
Jan  1 11:48:03.817: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.298914178s
STEP: Saw pod success
Jan  1 11:48:03.817: INFO: Pod "downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:48:03.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 11:48:03.915: INFO: Waiting for pod downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:48:04.271: INFO: Pod downwardapi-volume-8b75e4b8-2c8c-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:48:04.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nkk9l" for this suite.
Jan  1 11:48:10.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:48:10.743: INFO: namespace: e2e-tests-downward-api-nkk9l, resource: bindings, ignored listing per whitelist
Jan  1 11:48:10.803: INFO: namespace e2e-tests-downward-api-nkk9l deletion completed in 6.520284356s

• [SLOW TEST:17.552 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
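The polling lines above follow one fixed shape: a timestamp, the pod name, its `Phase`, `Reason`, readiness, and the elapsed wait. A minimal parser for that shape — a sketch based only on the line format visible in this log; the function name and regex are ours, not part of the e2e framework:

```python
import re

# Matches the pod-polling lines seen above, e.g.:
# Jan  1 11:47:53.537: INFO: Pod "downwardapi-...": Phase="Pending", Reason="", readiness=false. Elapsed: 18.478462ms
POLL_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>[^"]+)", Reason="(?P<reason>[^"]*)", '
    r'readiness=(?P<ready>true|false)(?:\. Elapsed: (?P<elapsed>\S+))?'
)

def parse_poll(line: str):
    """Return (pod, phase, elapsed-or-None) for one polling line, or None if it isn't one."""
    m = POLL_RE.search(line)
    if not m:
        return None
    return m.group("pod"), m.group("phase"), m.group("elapsed")

line = ('Jan  1 11:47:53.537: INFO: Pod "downwardapi-volume-8b75e4b8": '
        'Phase="Pending", Reason="", readiness=false. Elapsed: 18.478462ms')
print(parse_poll(line))  # → ('downwardapi-volume-8b75e4b8', 'Pending', '18.478462ms')
```

Feeding a whole log chunk through this line by line reconstructs each pod's Pending-to-Succeeded timeline.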
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:48:10.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  1 11:48:10.979: INFO: Waiting up to 5m0s for pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-p4xz6" to be "success or failure"
Jan  1 11:48:11.000: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.759231ms
Jan  1 11:48:13.021: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040837538s
Jan  1 11:48:15.063: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083083485s
Jan  1 11:48:17.088: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108522459s
Jan  1 11:48:19.100: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12003796s
Jan  1 11:48:21.162: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181877758s
STEP: Saw pod success
Jan  1 11:48:21.162: INFO: Pod "pod-95d86237-2c8c-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:48:21.175: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-95d86237-2c8c-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 11:48:21.610: INFO: Waiting for pod pod-95d86237-2c8c-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:48:22.631: INFO: Pod pod-95d86237-2c8c-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:48:22.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p4xz6" for this suite.
Jan  1 11:48:28.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:48:28.957: INFO: namespace: e2e-tests-emptydir-p4xz6, resource: bindings, ignored listing per whitelist
Jan  1 11:48:28.975: INFO: namespace e2e-tests-emptydir-p4xz6 deletion completed in 6.334087583s

• [SLOW TEST:18.172 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
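The `0644` the emptydir test above asserts is an ordinary octal POSIX file mode (owner read/write, group and world read-only). Decoding it with the standard `stat` constants — nothing here is read from the cluster, it is just the mode arithmetic:

```python
import stat

mode = 0o644  # the mode the (non-root,0644,tmpfs) test writes into the emptyDir volume
print(oct(mode))                                          # → 0o644
print(bool(mode & stat.S_IRUSR), bool(mode & stat.S_IWUSR))  # owner: read, write
print(bool(mode & stat.S_IROTH), bool(mode & stat.S_IWOTH))  # world: read, but no write
```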
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:48:28.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  1 11:48:29.130: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 11:48:29.211: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 11:48:29.217: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  1 11:48:29.238: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  1 11:48:29.238: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 11:48:29.238: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 11:48:29.238: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  1 11:48:29.238: INFO: 	Container weave ready: true, restart count 0
Jan  1 11:48:29.238: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 11:48:29.238: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  1 11:48:29.238: INFO: 	Container coredns ready: true, restart count 0
Jan  1 11:48:29.238: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 11:48:29.238: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 11:48:29.238: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 11:48:29.238: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  1 11:48:29.238: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a5ab00f0-2c8c-11ea-8bf6-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-a5ab00f0-2c8c-11ea-8bf6-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5ab00f0-2c8c-11ea-8bf6-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:48:49.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-vcqc7" for this suite.
Jan  1 11:49:03.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:49:04.009: INFO: namespace: e2e-tests-sched-pred-vcqc7, resource: bindings, ignored listing per whitelist
Jan  1 11:49:04.126: INFO: namespace e2e-tests-sched-pred-vcqc7 deletion completed in 14.268731641s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:35.149 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
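The NodeSelector predicate exercised above reduces to a subset check: the pod fits a node only if every key/value pair in the pod's `nodeSelector` appears verbatim in the node's labels, which is why applying the random label lets the pod schedule and removing it would not. A sketch of that check — the helper name is ours; the label key/value are the ones from the log above:

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True iff every nodeSelector entry is present, with the same value, in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

labels = {
    "kubernetes.io/hostname": "hunter-server-hu5at5svl7ps",
    "kubernetes.io/e2e-a5ab00f0-2c8c-11ea-8bf6-0242ac110005": "42",  # the random test label
}
selector = {"kubernetes.io/e2e-a5ab00f0-2c8c-11ea-8bf6-0242ac110005": "42"}

print(node_selector_matches(labels, selector))  # label applied → schedulable → True
labels.pop("kubernetes.io/e2e-a5ab00f0-2c8c-11ea-8bf6-0242ac110005")
print(node_selector_matches(labels, selector))  # label removed → not schedulable → False
```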
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:49:04.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-fnjcw in namespace e2e-tests-proxy-wn9hn
I0101 11:49:04.397826       8 runners.go:184] Created replication controller with name: proxy-service-fnjcw, namespace: e2e-tests-proxy-wn9hn, replica count: 1
I0101 11:49:05.449524       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:06.450227       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:07.450849       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:08.452055       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:09.452804       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:10.453819       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:11.454894       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:12.455673       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:13.456565       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:49:14.457237       8 runners.go:184] proxy-service-fnjcw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  1 11:49:14.482: INFO: setup took 10.207864758s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  1 11:49:14.539: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wn9hn/pods/http:proxy-service-fnjcw-dtgsd:160/proxy/: foo (200; 55.442864ms)
Jan  1 11:49:14.539: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wn9hn/pods/http:proxy-service-fnjcw-dtgsd:1080/proxy/: 
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan  1 11:49:31.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-kchxr run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  1 11:49:44.071: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan  1 11:49:44.072: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:49:46.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kchxr" for this suite.
Jan  1 11:49:53.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:49:53.207: INFO: namespace: e2e-tests-kubectl-kchxr, resource: bindings, ignored listing per whitelist
Jan  1 11:49:53.263: INFO: namespace e2e-tests-kubectl-kchxr deletion completed in 6.590380007s

• [SLOW TEST:22.276 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
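Each completed spec above ends with a `[SLOW TEST:N seconds]` banner, so total wall-clock time per chunk can be tallied straight from the log. A small extractor, assuming only the banner format shown above:

```python
import re

SLOW_RE = re.compile(r"\[SLOW TEST:(?P<secs>[0-9.]+) seconds\]")

def slow_test_seconds(log: str):
    """Collect every '[SLOW TEST:N seconds]' duration, in order, from an e2e log chunk."""
    return [float(m.group("secs")) for m in SLOW_RE.finditer(log)]

chunk = """\
• [SLOW TEST:17.552 seconds]
• [SLOW TEST:22.276 seconds]
"""
durations = slow_test_seconds(chunk)
print(durations)   # → [17.552, 22.276]
print(sum(durations))
```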
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:49:53.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-d30c87f5-2c8c-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:50:07.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cz9ql" for this suite.
Jan  1 11:50:31.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:50:32.056: INFO: namespace: e2e-tests-configmap-cz9ql, resource: bindings, ignored listing per whitelist
Jan  1 11:50:32.077: INFO: namespace e2e-tests-configmap-cz9ql deletion completed in 24.191628335s

• [SLOW TEST:38.812 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:50:32.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:50:42.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-pgvgr" for this suite.
Jan  1 11:51:24.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:51:24.620: INFO: namespace: e2e-tests-kubelet-test-pgvgr, resource: bindings, ignored listing per whitelist
Jan  1 11:51:24.655: INFO: namespace e2e-tests-kubelet-test-pgvgr deletion completed in 42.282803386s

• [SLOW TEST:52.578 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:51:24.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 11:51:24.826: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:51:39.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-klpd6" for this suite.
Jan  1 11:51:45.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:51:45.975: INFO: namespace: e2e-tests-init-container-klpd6, resource: bindings, ignored listing per whitelist
Jan  1 11:51:46.027: INFO: namespace e2e-tests-init-container-klpd6 deletion completed in 6.482723466s

• [SLOW TEST:21.371 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:51:46.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-htcmw/configmap-test-162b1f22-2c8d-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 11:51:46.239: INFO: Waiting up to 5m0s for pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-htcmw" to be "success or failure"
Jan  1 11:51:46.335: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.182597ms
Jan  1 11:51:48.355: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115980789s
Jan  1 11:51:50.376: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136829601s
Jan  1 11:51:52.430: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191003155s
Jan  1 11:51:54.446: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20724053s
Jan  1 11:51:56.468: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228764016s
STEP: Saw pod success
Jan  1 11:51:56.468: INFO: Pod "pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:51:56.476: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005 container env-test: 
STEP: delete the pod
Jan  1 11:51:56.739: INFO: Waiting for pod pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:51:56.758: INFO: Pod pod-configmaps-162c4534-2c8d-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:51:56.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-htcmw" for this suite.
Jan  1 11:52:02.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:52:02.982: INFO: namespace: e2e-tests-configmap-htcmw, resource: bindings, ignored listing per whitelist
Jan  1 11:52:03.105: INFO: namespace e2e-tests-configmap-htcmw deletion completed in 6.333361516s

• [SLOW TEST:17.077 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:52:03.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005
Jan  1 11:52:03.352: INFO: Pod name my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005: Found 0 pods out of 1
Jan  1 11:52:08.617: INFO: Pod name my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005: Found 1 pods out of 1
Jan  1 11:52:08.617: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005" are running
Jan  1 11:52:13.200: INFO: Pod "my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005-2qw4t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 11:52:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 11:52:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 11:52:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 11:52:03 +0000 UTC Reason: Message:}])
Jan  1 11:52:13.201: INFO: Trying to dial the pod
Jan  1 11:52:18.315: INFO: Controller my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005: Got expected result from replica 1 [my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005-2qw4t]: "my-hostname-basic-205f778e-2c8d-11ea-8bf6-0242ac110005-2qw4t", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:52:18.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-pp7m8" for this suite.
Jan  1 11:52:26.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:52:26.594: INFO: namespace: e2e-tests-replication-controller-pp7m8, resource: bindings, ignored listing per whitelist
Jan  1 11:52:26.759: INFO: namespace e2e-tests-replication-controller-pp7m8 deletion completed in 8.428652634s

• [SLOW TEST:23.653 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:52:26.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2e6700e1-2c8d-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 11:52:26.966: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-l4hlr" to be "success or failure"
Jan  1 11:52:26.980: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.203778ms
Jan  1 11:52:28.997: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030868068s
Jan  1 11:52:31.016: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050079154s
Jan  1 11:52:33.039: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073324055s
Jan  1 11:52:35.230: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264470026s
Jan  1 11:52:37.250: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283805315s
Jan  1 11:52:39.269: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.302932278s
STEP: Saw pod success
Jan  1 11:52:39.269: INFO: Pod "pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:52:39.274: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 11:52:39.825: INFO: Waiting for pod pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:52:40.185: INFO: Pod pod-projected-secrets-2e72a9b4-2c8d-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:52:40.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l4hlr" for this suite.
Jan  1 11:52:46.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:52:46.601: INFO: namespace: e2e-tests-projected-l4hlr, resource: bindings, ignored listing per whitelist
Jan  1 11:52:46.601: INFO: namespace e2e-tests-projected-l4hlr deletion completed in 6.375176928s

• [SLOW TEST:19.842 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:52:46.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-98n64
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 11:52:46.816: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 11:53:23.194: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-98n64 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 11:53:23.194: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 11:53:24.149: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:53:24.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-98n64" for this suite.
Jan  1 11:53:52.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:53:52.355: INFO: namespace: e2e-tests-pod-network-test-98n64, resource: bindings, ignored listing per whitelist
Jan  1 11:53:52.361: INFO: namespace e2e-tests-pod-network-test-98n64 deletion completed in 28.192092762s

• [SLOW TEST:65.759 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
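The intra-pod UDP check above works by exec'ing into a host test pod and curling the test container's `/dial` endpoint, then waiting until the set of expected pod hostnames that have not yet answered drains to empty ("Waiting for endpoints: map[]"). A minimal sketch of that logic, assuming the dial endpoint returns a JSON body shaped like `{"responses": ["hostname"]}` (the response shape is an assumption, not taken from this log):

```python
import json
from urllib.parse import urlencode

def build_dial_url(probe_ip, target_ip, target_port, protocol, tries=1):
    # Mirrors the URL visible in the log:
    # http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{probe_ip}:8080/dial?{query}"

def remaining_endpoints(expected, dial_body):
    # The test repeats the dial until every expected pod hostname has
    # answered at least once; success is an empty remaining set.
    seen = set(json.loads(dial_body).get("responses", []))
    return expected - seen
```

For example, `remaining_endpoints({"netserver-0"}, '{"responses": ["netserver-0"]}')` returns an empty set, which corresponds to the `map[]` the framework logs before tearing down the namespace.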
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:53:52.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 11:53:52.747: INFO: Number of nodes with available pods: 0
Jan  1 11:53:52.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:53.799: INFO: Number of nodes with available pods: 0
Jan  1 11:53:53.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:54.799: INFO: Number of nodes with available pods: 0
Jan  1 11:53:54.799: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:55.784: INFO: Number of nodes with available pods: 0
Jan  1 11:53:55.784: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:56.783: INFO: Number of nodes with available pods: 0
Jan  1 11:53:56.784: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:57.781: INFO: Number of nodes with available pods: 0
Jan  1 11:53:57.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:58.767: INFO: Number of nodes with available pods: 0
Jan  1 11:53:58.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:53:59.771: INFO: Number of nodes with available pods: 0
Jan  1 11:53:59.771: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:54:00.785: INFO: Number of nodes with available pods: 0
Jan  1 11:54:00.785: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:54:01.775: INFO: Number of nodes with available pods: 0
Jan  1 11:54:01.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 11:54:02.839: INFO: Number of nodes with available pods: 1
Jan  1 11:54:02.839: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  1 11:54:02.977: INFO: Number of nodes with available pods: 1
Jan  1 11:54:02.977: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-h5vpm, will wait for the garbage collector to delete the pods
Jan  1 11:54:04.062: INFO: Deleting DaemonSet.extensions daemon-set took: 11.969634ms
Jan  1 11:54:05.763: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.700763405s
Jan  1 11:54:10.177: INFO: Number of nodes with available pods: 0
Jan  1 11:54:10.177: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 11:54:10.261: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-h5vpm/daemonsets","resourceVersion":"16790785"},"items":null}

Jan  1 11:54:10.267: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-h5vpm/pods","resourceVersion":"16790785"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:54:10.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-h5vpm" for this suite.
Jan  1 11:54:18.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:54:18.557: INFO: namespace: e2e-tests-daemonsets-h5vpm, resource: bindings, ignored listing per whitelist
Jan  1 11:54:18.574: INFO: namespace e2e-tests-daemonsets-h5vpm deletion completed in 8.284404517s

• [SLOW TEST:26.213 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
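The DaemonSet spec above is dominated by poll loops: the framework logs "Number of nodes with available pods" roughly once per second until it matches the node count, and the same pattern backs every "Waiting up to ..." line. A generic sketch of such a bounded poll loop (the parameter names and injectable `clock`/`sleep` hooks are illustrative, not the framework's actual API):

```python
import time

def wait_for(predicate, timeout=30.0, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    # Analogue of the e2e framework's "Waiting up to <timeout> for ..."
    # loops: re-check the condition every `interval` seconds and give up
    # once the deadline passes. Returns True on success, False on timeout.
    deadline = clock() + timeout
    while True:
        if predicate():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

def daemonset_ready(schedulable_nodes, available_pods_per_node):
    # The DaemonSet test is satisfied once every schedulable node reports
    # at least one available daemon pod (1 node / 1 pod in this run).
    nodes_with_pods = sum(1 for n in available_pods_per_node.values() if n >= 1)
    return nodes_with_pods == schedulable_nodes
```

The "retry creating failed daemon pods" case then forces a pod into the `Failed` phase and reuses the same wait loop to observe the controller recreating it.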
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:54:18.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-mkxl6
I0101 11:54:18.951989       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-mkxl6, replica count: 1
I0101 11:54:20.003911       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:21.004669       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:22.005307       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:23.006165       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:24.007028       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:25.008925       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:26.009745       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:27.010496       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 11:54:28.011031       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  1 11:54:28.180: INFO: Created: latency-svc-gc494
Jan  1 11:54:28.223: INFO: Got endpoints: latency-svc-gc494 [111.675727ms]
Jan  1 11:54:28.324: INFO: Created: latency-svc-ks26h
Jan  1 11:54:28.348: INFO: Got endpoints: latency-svc-ks26h [123.81287ms]
Jan  1 11:54:28.417: INFO: Created: latency-svc-jcxd8
Jan  1 11:54:28.583: INFO: Created: latency-svc-vfqwr
Jan  1 11:54:28.587: INFO: Got endpoints: latency-svc-jcxd8 [362.279129ms]
Jan  1 11:54:28.598: INFO: Got endpoints: latency-svc-vfqwr [374.108739ms]
Jan  1 11:54:28.750: INFO: Created: latency-svc-bqj2c
Jan  1 11:54:28.817: INFO: Created: latency-svc-s4cxj
Jan  1 11:54:28.817: INFO: Got endpoints: latency-svc-bqj2c [593.577954ms]
Jan  1 11:54:28.923: INFO: Got endpoints: latency-svc-s4cxj [698.918661ms]
Jan  1 11:54:29.019: INFO: Created: latency-svc-2jhh8
Jan  1 11:54:29.120: INFO: Got endpoints: latency-svc-2jhh8 [895.526975ms]
Jan  1 11:54:29.206: INFO: Created: latency-svc-g6hgq
Jan  1 11:54:29.211: INFO: Got endpoints: latency-svc-g6hgq [986.09759ms]
Jan  1 11:54:29.328: INFO: Created: latency-svc-rmxl7
Jan  1 11:54:29.414: INFO: Created: latency-svc-zz2jg
Jan  1 11:54:29.414: INFO: Got endpoints: latency-svc-rmxl7 [1.188452242s]
Jan  1 11:54:29.594: INFO: Got endpoints: latency-svc-zz2jg [1.368555751s]
Jan  1 11:54:29.620: INFO: Created: latency-svc-nnslk
Jan  1 11:54:29.673: INFO: Got endpoints: latency-svc-nnslk [1.447979718s]
Jan  1 11:54:29.819: INFO: Created: latency-svc-2xt8v
Jan  1 11:54:29.856: INFO: Got endpoints: latency-svc-2xt8v [1.631341324s]
Jan  1 11:54:29.911: INFO: Created: latency-svc-dphqm
Jan  1 11:54:30.061: INFO: Got endpoints: latency-svc-dphqm [1.835448332s]
Jan  1 11:54:30.084: INFO: Created: latency-svc-6xhgz
Jan  1 11:54:30.098: INFO: Got endpoints: latency-svc-6xhgz [1.872520797s]
Jan  1 11:54:30.147: INFO: Created: latency-svc-894qx
Jan  1 11:54:30.255: INFO: Got endpoints: latency-svc-894qx [2.029808607s]
Jan  1 11:54:30.279: INFO: Created: latency-svc-vsjc2
Jan  1 11:54:30.293: INFO: Got endpoints: latency-svc-vsjc2 [2.069140464s]
Jan  1 11:54:30.332: INFO: Created: latency-svc-2m55m
Jan  1 11:54:30.475: INFO: Got endpoints: latency-svc-2m55m [2.125990887s]
Jan  1 11:54:30.496: INFO: Created: latency-svc-8g4fv
Jan  1 11:54:30.513: INFO: Got endpoints: latency-svc-8g4fv [1.926479899s]
Jan  1 11:54:30.718: INFO: Created: latency-svc-444sk
Jan  1 11:54:30.746: INFO: Got endpoints: latency-svc-444sk [2.147652285s]
Jan  1 11:54:30.979: INFO: Created: latency-svc-8sw6j
Jan  1 11:54:31.012: INFO: Got endpoints: latency-svc-8sw6j [2.194024961s]
Jan  1 11:54:31.211: INFO: Created: latency-svc-7m2lp
Jan  1 11:54:31.232: INFO: Got endpoints: latency-svc-7m2lp [2.309346601s]
Jan  1 11:54:31.272: INFO: Created: latency-svc-5nmmw
Jan  1 11:54:31.286: INFO: Got endpoints: latency-svc-5nmmw [2.165606481s]
Jan  1 11:54:31.519: INFO: Created: latency-svc-59dwq
Jan  1 11:54:31.519: INFO: Got endpoints: latency-svc-59dwq [2.308137802s]
Jan  1 11:54:32.046: INFO: Created: latency-svc-xv8mq
Jan  1 11:54:32.055: INFO: Got endpoints: latency-svc-xv8mq [2.641089654s]
Jan  1 11:54:32.276: INFO: Created: latency-svc-94qgw
Jan  1 11:54:32.470: INFO: Got endpoints: latency-svc-94qgw [2.876201344s]
Jan  1 11:54:32.513: INFO: Created: latency-svc-scxdz
Jan  1 11:54:32.518: INFO: Got endpoints: latency-svc-scxdz [2.844199268s]
Jan  1 11:54:32.685: INFO: Created: latency-svc-h2hb6
Jan  1 11:54:32.693: INFO: Got endpoints: latency-svc-h2hb6 [2.836137135s]
Jan  1 11:54:32.735: INFO: Created: latency-svc-sk8vz
Jan  1 11:54:32.748: INFO: Got endpoints: latency-svc-sk8vz [2.686315384s]
Jan  1 11:54:32.897: INFO: Created: latency-svc-l6jj2
Jan  1 11:54:32.913: INFO: Got endpoints: latency-svc-l6jj2 [2.815286378s]
Jan  1 11:54:33.164: INFO: Created: latency-svc-zml2l
Jan  1 11:54:33.189: INFO: Got endpoints: latency-svc-zml2l [2.933704559s]
Jan  1 11:54:33.497: INFO: Created: latency-svc-p8d29
Jan  1 11:54:33.498: INFO: Got endpoints: latency-svc-p8d29 [3.204164292s]
Jan  1 11:54:33.637: INFO: Created: latency-svc-n5dv4
Jan  1 11:54:33.653: INFO: Got endpoints: latency-svc-n5dv4 [3.178225818s]
Jan  1 11:54:33.848: INFO: Created: latency-svc-544pv
Jan  1 11:54:33.931: INFO: Got endpoints: latency-svc-544pv [3.417542555s]
Jan  1 11:54:33.965: INFO: Created: latency-svc-l26b4
Jan  1 11:54:33.966: INFO: Got endpoints: latency-svc-l26b4 [3.219012946s]
Jan  1 11:54:34.166: INFO: Created: latency-svc-nj5mt
Jan  1 11:54:34.181: INFO: Got endpoints: latency-svc-nj5mt [3.169106559s]
Jan  1 11:54:34.328: INFO: Created: latency-svc-sxvkt
Jan  1 11:54:34.353: INFO: Got endpoints: latency-svc-sxvkt [3.119986733s]
Jan  1 11:54:34.549: INFO: Created: latency-svc-bh6nz
Jan  1 11:54:34.585: INFO: Got endpoints: latency-svc-bh6nz [3.298140677s]
Jan  1 11:54:34.838: INFO: Created: latency-svc-n46vt
Jan  1 11:54:34.839: INFO: Got endpoints: latency-svc-n46vt [3.319012278s]
Jan  1 11:54:35.145: INFO: Created: latency-svc-4gwx6
Jan  1 11:54:35.327: INFO: Created: latency-svc-js826
Jan  1 11:54:35.333: INFO: Got endpoints: latency-svc-4gwx6 [3.277865479s]
Jan  1 11:54:35.347: INFO: Got endpoints: latency-svc-js826 [2.876358891s]
Jan  1 11:54:35.502: INFO: Created: latency-svc-7hffl
Jan  1 11:54:35.512: INFO: Got endpoints: latency-svc-7hffl [2.994388289s]
Jan  1 11:54:35.569: INFO: Created: latency-svc-tpxtb
Jan  1 11:54:35.579: INFO: Got endpoints: latency-svc-tpxtb [2.886389916s]
Jan  1 11:54:35.795: INFO: Created: latency-svc-8mlt4
Jan  1 11:54:35.821: INFO: Got endpoints: latency-svc-8mlt4 [3.073010343s]
Jan  1 11:54:35.852: INFO: Created: latency-svc-rcnjl
Jan  1 11:54:36.017: INFO: Got endpoints: latency-svc-rcnjl [3.103261746s]
Jan  1 11:54:36.029: INFO: Created: latency-svc-cdv96
Jan  1 11:54:36.048: INFO: Got endpoints: latency-svc-cdv96 [2.858264137s]
Jan  1 11:54:36.095: INFO: Created: latency-svc-bm68s
Jan  1 11:54:36.226: INFO: Got endpoints: latency-svc-bm68s [208.464032ms]
Jan  1 11:54:36.252: INFO: Created: latency-svc-gbkbk
Jan  1 11:54:36.287: INFO: Got endpoints: latency-svc-gbkbk [2.788829306s]
Jan  1 11:54:36.324: INFO: Created: latency-svc-ftnmd
Jan  1 11:54:36.509: INFO: Got endpoints: latency-svc-ftnmd [2.855388918s]
Jan  1 11:54:36.579: INFO: Created: latency-svc-vljfn
Jan  1 11:54:36.590: INFO: Got endpoints: latency-svc-vljfn [2.658788897s]
Jan  1 11:54:36.744: INFO: Created: latency-svc-m7mw7
Jan  1 11:54:36.779: INFO: Got endpoints: latency-svc-m7mw7 [2.813077183s]
Jan  1 11:54:36.973: INFO: Created: latency-svc-mms2x
Jan  1 11:54:36.973: INFO: Created: latency-svc-c7t2z
Jan  1 11:54:36.986: INFO: Got endpoints: latency-svc-mms2x [2.80439638s]
Jan  1 11:54:36.994: INFO: Got endpoints: latency-svc-c7t2z [2.640666167s]
Jan  1 11:54:37.273: INFO: Created: latency-svc-5prfw
Jan  1 11:54:37.273: INFO: Got endpoints: latency-svc-5prfw [2.688220908s]
Jan  1 11:54:37.421: INFO: Created: latency-svc-kdstw
Jan  1 11:54:37.452: INFO: Got endpoints: latency-svc-kdstw [2.612724949s]
Jan  1 11:54:37.498: INFO: Created: latency-svc-h2cl9
Jan  1 11:54:37.666: INFO: Got endpoints: latency-svc-h2cl9 [2.331929679s]
Jan  1 11:54:37.694: INFO: Created: latency-svc-v7kj9
Jan  1 11:54:37.719: INFO: Got endpoints: latency-svc-v7kj9 [2.371435463s]
Jan  1 11:54:37.886: INFO: Created: latency-svc-zxwb5
Jan  1 11:54:37.904: INFO: Got endpoints: latency-svc-zxwb5 [2.39138514s]
Jan  1 11:54:38.056: INFO: Created: latency-svc-xm4pc
Jan  1 11:54:38.128: INFO: Got endpoints: latency-svc-xm4pc [2.548789705s]
Jan  1 11:54:38.407: INFO: Created: latency-svc-nf65f
Jan  1 11:54:38.838: INFO: Got endpoints: latency-svc-nf65f [3.016636292s]
Jan  1 11:54:38.841: INFO: Created: latency-svc-79jjr
Jan  1 11:54:38.865: INFO: Got endpoints: latency-svc-79jjr [2.817513313s]
Jan  1 11:54:38.920: INFO: Created: latency-svc-xw8gx
Jan  1 11:54:39.029: INFO: Got endpoints: latency-svc-xw8gx [2.802793712s]
Jan  1 11:54:39.049: INFO: Created: latency-svc-nn52t
Jan  1 11:54:39.071: INFO: Got endpoints: latency-svc-nn52t [2.784056399s]
Jan  1 11:54:39.211: INFO: Created: latency-svc-pzzk4
Jan  1 11:54:39.211: INFO: Got endpoints: latency-svc-pzzk4 [2.702076441s]
Jan  1 11:54:39.434: INFO: Created: latency-svc-46qdp
Jan  1 11:54:39.467: INFO: Got endpoints: latency-svc-46qdp [2.876263649s]
Jan  1 11:54:39.517: INFO: Created: latency-svc-g7ttk
Jan  1 11:54:39.640: INFO: Got endpoints: latency-svc-g7ttk [2.860544628s]
Jan  1 11:54:39.670: INFO: Created: latency-svc-jf4wh
Jan  1 11:54:39.676: INFO: Got endpoints: latency-svc-jf4wh [2.689937351s]
Jan  1 11:54:39.747: INFO: Created: latency-svc-7tcqx
Jan  1 11:54:39.840: INFO: Got endpoints: latency-svc-7tcqx [2.845955834s]
Jan  1 11:54:39.884: INFO: Created: latency-svc-jmnnx
Jan  1 11:54:39.916: INFO: Got endpoints: latency-svc-jmnnx [2.642987423s]
Jan  1 11:54:40.061: INFO: Created: latency-svc-655j8
Jan  1 11:54:40.100: INFO: Got endpoints: latency-svc-655j8 [2.647799456s]
Jan  1 11:54:40.145: INFO: Created: latency-svc-dbtzc
Jan  1 11:54:40.294: INFO: Got endpoints: latency-svc-dbtzc [2.627501288s]
Jan  1 11:54:40.335: INFO: Created: latency-svc-zh8j7
Jan  1 11:54:40.336: INFO: Got endpoints: latency-svc-zh8j7 [2.616414091s]
Jan  1 11:54:40.559: INFO: Created: latency-svc-g4qxq
Jan  1 11:54:40.585: INFO: Got endpoints: latency-svc-g4qxq [2.680999996s]
Jan  1 11:54:40.918: INFO: Created: latency-svc-n79mh
Jan  1 11:54:40.940: INFO: Got endpoints: latency-svc-n79mh [2.81141565s]
Jan  1 11:54:40.985: INFO: Created: latency-svc-kn4vk
Jan  1 11:54:41.131: INFO: Got endpoints: latency-svc-kn4vk [2.292230202s]
Jan  1 11:54:41.151: INFO: Created: latency-svc-4qf9g
Jan  1 11:54:41.186: INFO: Got endpoints: latency-svc-4qf9g [2.320300221s]
Jan  1 11:54:41.301: INFO: Created: latency-svc-vrm6j
Jan  1 11:54:41.339: INFO: Got endpoints: latency-svc-vrm6j [2.310034644s]
Jan  1 11:54:41.533: INFO: Created: latency-svc-9pswm
Jan  1 11:54:41.547: INFO: Got endpoints: latency-svc-9pswm [2.475540072s]
Jan  1 11:54:41.603: INFO: Created: latency-svc-gjvvl
Jan  1 11:54:41.758: INFO: Got endpoints: latency-svc-gjvvl [2.546469736s]
Jan  1 11:54:41.788: INFO: Created: latency-svc-mzr62
Jan  1 11:54:41.820: INFO: Got endpoints: latency-svc-mzr62 [2.352989959s]
Jan  1 11:54:42.050: INFO: Created: latency-svc-jfnl5
Jan  1 11:54:42.054: INFO: Got endpoints: latency-svc-jfnl5 [2.413824404s]
Jan  1 11:54:42.287: INFO: Created: latency-svc-5shp2
Jan  1 11:54:42.314: INFO: Got endpoints: latency-svc-5shp2 [2.638042747s]
Jan  1 11:54:42.370: INFO: Created: latency-svc-6qqhb
Jan  1 11:54:42.514: INFO: Got endpoints: latency-svc-6qqhb [2.674297843s]
Jan  1 11:54:42.589: INFO: Created: latency-svc-hlbn9
Jan  1 11:54:42.590: INFO: Got endpoints: latency-svc-hlbn9 [2.673600805s]
Jan  1 11:54:42.848: INFO: Created: latency-svc-cd6bm
Jan  1 11:54:42.889: INFO: Created: latency-svc-hzpjj
Jan  1 11:54:42.922: INFO: Got endpoints: latency-svc-cd6bm [2.821721608s]
Jan  1 11:54:43.028: INFO: Got endpoints: latency-svc-hzpjj [2.73343407s]
Jan  1 11:54:43.110: INFO: Created: latency-svc-k9d8b
Jan  1 11:54:43.221: INFO: Got endpoints: latency-svc-k9d8b [2.885182178s]
Jan  1 11:54:43.253: INFO: Created: latency-svc-4kjt7
Jan  1 11:54:43.287: INFO: Got endpoints: latency-svc-4kjt7 [2.701305495s]
Jan  1 11:54:43.545: INFO: Created: latency-svc-lkw9q
Jan  1 11:54:43.545: INFO: Got endpoints: latency-svc-lkw9q [2.604769849s]
Jan  1 11:54:43.568: INFO: Created: latency-svc-ls9pp
Jan  1 11:54:43.582: INFO: Got endpoints: latency-svc-ls9pp [2.450500791s]
Jan  1 11:54:43.800: INFO: Created: latency-svc-nqd4f
Jan  1 11:54:43.898: INFO: Got endpoints: latency-svc-nqd4f [2.712019574s]
Jan  1 11:54:43.914: INFO: Created: latency-svc-2gs58
Jan  1 11:54:43.932: INFO: Got endpoints: latency-svc-2gs58 [2.592051132s]
Jan  1 11:54:44.104: INFO: Created: latency-svc-96cr4
Jan  1 11:54:44.120: INFO: Got endpoints: latency-svc-96cr4 [2.572636592s]
Jan  1 11:54:44.185: INFO: Created: latency-svc-t9cf9
Jan  1 11:54:44.288: INFO: Got endpoints: latency-svc-t9cf9 [2.529066634s]
Jan  1 11:54:44.386: INFO: Created: latency-svc-qx2r8
Jan  1 11:54:44.515: INFO: Got endpoints: latency-svc-qx2r8 [2.69401804s]
Jan  1 11:54:44.545: INFO: Created: latency-svc-vk9mr
Jan  1 11:54:44.568: INFO: Got endpoints: latency-svc-vk9mr [2.51357067s]
Jan  1 11:54:44.691: INFO: Created: latency-svc-x8bh4
Jan  1 11:54:44.724: INFO: Got endpoints: latency-svc-x8bh4 [2.40934043s]
Jan  1 11:54:44.892: INFO: Created: latency-svc-plpqc
Jan  1 11:54:44.917: INFO: Got endpoints: latency-svc-plpqc [2.401411023s]
Jan  1 11:54:44.979: INFO: Created: latency-svc-qh4s2
Jan  1 11:54:45.101: INFO: Got endpoints: latency-svc-qh4s2 [2.509808418s]
Jan  1 11:54:45.136: INFO: Created: latency-svc-gghqn
Jan  1 11:54:45.157: INFO: Got endpoints: latency-svc-gghqn [2.234320419s]
Jan  1 11:54:45.290: INFO: Created: latency-svc-bxfhf
Jan  1 11:54:45.310: INFO: Got endpoints: latency-svc-bxfhf [2.281683165s]
Jan  1 11:54:45.399: INFO: Created: latency-svc-xpsxr
Jan  1 11:54:45.462: INFO: Got endpoints: latency-svc-xpsxr [2.240480703s]
Jan  1 11:54:45.486: INFO: Created: latency-svc-lnkhd
Jan  1 11:54:45.521: INFO: Got endpoints: latency-svc-lnkhd [2.233087126s]
Jan  1 11:54:45.721: INFO: Created: latency-svc-c2cxb
Jan  1 11:54:45.817: INFO: Got endpoints: latency-svc-c2cxb [2.271900751s]
Jan  1 11:54:46.091: INFO: Created: latency-svc-gjvnz
Jan  1 11:54:46.306: INFO: Got endpoints: latency-svc-gjvnz [2.724308113s]
Jan  1 11:54:46.412: INFO: Created: latency-svc-h5m92
Jan  1 11:54:46.682: INFO: Got endpoints: latency-svc-h5m92 [2.782758878s]
Jan  1 11:54:46.726: INFO: Created: latency-svc-wt766
Jan  1 11:54:46.733: INFO: Got endpoints: latency-svc-wt766 [2.801076992s]
Jan  1 11:54:46.908: INFO: Created: latency-svc-wsmsk
Jan  1 11:54:46.934: INFO: Got endpoints: latency-svc-wsmsk [2.813840325s]
Jan  1 11:54:46.981: INFO: Created: latency-svc-zpkpt
Jan  1 11:54:47.089: INFO: Got endpoints: latency-svc-zpkpt [2.800637127s]
Jan  1 11:54:47.182: INFO: Created: latency-svc-6ffts
Jan  1 11:54:47.302: INFO: Created: latency-svc-kvwsl
Jan  1 11:54:47.320: INFO: Got endpoints: latency-svc-6ffts [2.805058616s]
Jan  1 11:54:47.325: INFO: Got endpoints: latency-svc-kvwsl [2.756427507s]
Jan  1 11:54:47.391: INFO: Created: latency-svc-4lrrb
Jan  1 11:54:47.489: INFO: Got endpoints: latency-svc-4lrrb [2.764933146s]
Jan  1 11:54:47.556: INFO: Created: latency-svc-7x948
Jan  1 11:54:47.564: INFO: Got endpoints: latency-svc-7x948 [2.646880145s]
Jan  1 11:54:47.839: INFO: Created: latency-svc-pd4dr
Jan  1 11:54:47.861: INFO: Got endpoints: latency-svc-pd4dr [2.75915714s]
Jan  1 11:54:48.058: INFO: Created: latency-svc-7vl6s
Jan  1 11:54:48.094: INFO: Got endpoints: latency-svc-7vl6s [2.937065626s]
Jan  1 11:54:48.320: INFO: Created: latency-svc-9qjwv
Jan  1 11:54:48.350: INFO: Got endpoints: latency-svc-9qjwv [3.039735123s]
Jan  1 11:54:48.380: INFO: Created: latency-svc-9rbft
Jan  1 11:54:48.476: INFO: Got endpoints: latency-svc-9rbft [3.013845797s]
Jan  1 11:54:48.532: INFO: Created: latency-svc-t7dd8
Jan  1 11:54:48.738: INFO: Got endpoints: latency-svc-t7dd8 [3.217012845s]
Jan  1 11:54:48.755: INFO: Created: latency-svc-r28wk
Jan  1 11:54:48.760: INFO: Got endpoints: latency-svc-r28wk [2.942056437s]
Jan  1 11:54:48.805: INFO: Created: latency-svc-bvwg5
Jan  1 11:54:48.825: INFO: Got endpoints: latency-svc-bvwg5 [2.518674234s]
Jan  1 11:54:48.964: INFO: Created: latency-svc-shzfq
Jan  1 11:54:48.976: INFO: Got endpoints: latency-svc-shzfq [2.293619868s]
Jan  1 11:54:49.144: INFO: Created: latency-svc-g92nn
Jan  1 11:54:49.162: INFO: Got endpoints: latency-svc-g92nn [2.42835279s]
Jan  1 11:54:49.207: INFO: Created: latency-svc-mxtrs
Jan  1 11:54:49.226: INFO: Got endpoints: latency-svc-mxtrs [2.292041572s]
Jan  1 11:54:49.321: INFO: Created: latency-svc-mw98n
Jan  1 11:54:49.343: INFO: Got endpoints: latency-svc-mw98n [2.253894392s]
Jan  1 11:54:49.420: INFO: Created: latency-svc-58n8b
Jan  1 11:54:49.522: INFO: Got endpoints: latency-svc-58n8b [2.20131346s]
Jan  1 11:54:49.580: INFO: Created: latency-svc-jpql2
Jan  1 11:54:49.612: INFO: Got endpoints: latency-svc-jpql2 [2.287131443s]
Jan  1 11:54:49.759: INFO: Created: latency-svc-95h75
Jan  1 11:54:49.779: INFO: Got endpoints: latency-svc-95h75 [2.289821385s]
Jan  1 11:54:49.794: INFO: Created: latency-svc-8sznz
Jan  1 11:54:49.952: INFO: Got endpoints: latency-svc-8sznz [2.388324964s]
Jan  1 11:54:49.983: INFO: Created: latency-svc-6nhgc
Jan  1 11:54:50.019: INFO: Got endpoints: latency-svc-6nhgc [2.157861107s]
Jan  1 11:54:50.121: INFO: Created: latency-svc-7wbm5
Jan  1 11:54:50.141: INFO: Got endpoints: latency-svc-7wbm5 [2.046917364s]
Jan  1 11:54:50.199: INFO: Created: latency-svc-275fj
Jan  1 11:54:50.323: INFO: Got endpoints: latency-svc-275fj [1.972570339s]
Jan  1 11:54:50.360: INFO: Created: latency-svc-mdhq9
Jan  1 11:54:50.403: INFO: Got endpoints: latency-svc-mdhq9 [1.926120151s]
Jan  1 11:54:50.547: INFO: Created: latency-svc-vpm7g
Jan  1 11:54:50.575: INFO: Got endpoints: latency-svc-vpm7g [1.836697002s]
Jan  1 11:54:50.745: INFO: Created: latency-svc-nsm6k
Jan  1 11:54:50.756: INFO: Got endpoints: latency-svc-nsm6k [1.996247266s]
Jan  1 11:54:50.914: INFO: Created: latency-svc-t5kr4
Jan  1 11:54:50.959: INFO: Got endpoints: latency-svc-t5kr4 [2.133348745s]
Jan  1 11:54:50.967: INFO: Created: latency-svc-4tlm8
Jan  1 11:54:50.976: INFO: Got endpoints: latency-svc-4tlm8 [2.000132891s]
Jan  1 11:54:51.236: INFO: Created: latency-svc-rwwr7
Jan  1 11:54:51.261: INFO: Got endpoints: latency-svc-rwwr7 [2.099740961s]
Jan  1 11:54:51.302: INFO: Created: latency-svc-zlhkx
Jan  1 11:54:51.310: INFO: Got endpoints: latency-svc-zlhkx [2.083241522s]
Jan  1 11:54:51.462: INFO: Created: latency-svc-d6bxp
Jan  1 11:54:51.468: INFO: Got endpoints: latency-svc-d6bxp [2.124053913s]
Jan  1 11:54:51.523: INFO: Created: latency-svc-nl2xb
Jan  1 11:54:51.697: INFO: Got endpoints: latency-svc-nl2xb [2.174454475s]
Jan  1 11:54:51.733: INFO: Created: latency-svc-vw859
Jan  1 11:54:51.918: INFO: Got endpoints: latency-svc-vw859 [2.305588464s]
Jan  1 11:54:51.943: INFO: Created: latency-svc-qfmwb
Jan  1 11:54:51.959: INFO: Got endpoints: latency-svc-qfmwb [2.179306765s]
Jan  1 11:54:52.176: INFO: Created: latency-svc-n7psr
Jan  1 11:54:52.177: INFO: Got endpoints: latency-svc-n7psr [2.223831824s]
Jan  1 11:54:52.213: INFO: Created: latency-svc-s7857
Jan  1 11:54:52.230: INFO: Got endpoints: latency-svc-s7857 [2.211135888s]
Jan  1 11:54:52.333: INFO: Created: latency-svc-z6hzk
Jan  1 11:54:52.351: INFO: Got endpoints: latency-svc-z6hzk [2.209438752s]
Jan  1 11:54:52.409: INFO: Created: latency-svc-bvpdq
Jan  1 11:54:52.536: INFO: Got endpoints: latency-svc-bvpdq [2.213251663s]
Jan  1 11:54:52.617: INFO: Created: latency-svc-xctrv
Jan  1 11:54:52.746: INFO: Got endpoints: latency-svc-xctrv [2.343022118s]
Jan  1 11:54:52.761: INFO: Created: latency-svc-fdhws
Jan  1 11:54:52.780: INFO: Got endpoints: latency-svc-fdhws [2.204892451s]
Jan  1 11:54:52.951: INFO: Created: latency-svc-blv54
Jan  1 11:54:52.963: INFO: Got endpoints: latency-svc-blv54 [2.206610246s]
Jan  1 11:54:53.034: INFO: Created: latency-svc-8lh9g
Jan  1 11:54:53.166: INFO: Got endpoints: latency-svc-8lh9g [2.206608781s]
Jan  1 11:54:53.199: INFO: Created: latency-svc-hsxvk
Jan  1 11:54:53.211: INFO: Got endpoints: latency-svc-hsxvk [2.234871031s]
Jan  1 11:54:53.441: INFO: Created: latency-svc-gbjzz
Jan  1 11:54:53.444: INFO: Got endpoints: latency-svc-gbjzz [2.182460175s]
Jan  1 11:54:53.636: INFO: Created: latency-svc-rnqkc
Jan  1 11:54:53.660: INFO: Got endpoints: latency-svc-rnqkc [2.350148294s]
Jan  1 11:54:53.918: INFO: Created: latency-svc-fmzcf
Jan  1 11:54:53.929: INFO: Got endpoints: latency-svc-fmzcf [2.461021382s]
Jan  1 11:54:54.091: INFO: Created: latency-svc-9d858
Jan  1 11:54:54.124: INFO: Got endpoints: latency-svc-9d858 [2.426941736s]
Jan  1 11:54:54.289: INFO: Created: latency-svc-wshnm
Jan  1 11:54:54.363: INFO: Got endpoints: latency-svc-wshnm [2.444481258s]
Jan  1 11:54:54.469: INFO: Created: latency-svc-8bf4d
Jan  1 11:54:54.490: INFO: Got endpoints: latency-svc-8bf4d [2.530736005s]
Jan  1 11:54:54.706: INFO: Created: latency-svc-r2npm
Jan  1 11:54:54.724: INFO: Got endpoints: latency-svc-r2npm [2.547220996s]
Jan  1 11:54:54.753: INFO: Created: latency-svc-qhcxb
Jan  1 11:54:54.775: INFO: Got endpoints: latency-svc-qhcxb [2.544872673s]
Jan  1 11:54:54.905: INFO: Created: latency-svc-swlr8
Jan  1 11:54:54.954: INFO: Got endpoints: latency-svc-swlr8 [2.602542179s]
Jan  1 11:54:54.977: INFO: Created: latency-svc-tkp9c
Jan  1 11:54:55.140: INFO: Got endpoints: latency-svc-tkp9c [2.602630847s]
Jan  1 11:54:55.196: INFO: Created: latency-svc-pldsf
Jan  1 11:54:55.215: INFO: Got endpoints: latency-svc-pldsf [2.468422414s]
Jan  1 11:54:55.231: INFO: Created: latency-svc-h76s7
Jan  1 11:54:55.236: INFO: Got endpoints: latency-svc-h76s7 [2.455193062s]
Jan  1 11:54:55.387: INFO: Created: latency-svc-7224k
Jan  1 11:54:55.411: INFO: Got endpoints: latency-svc-7224k [2.447398665s]
Jan  1 11:54:55.578: INFO: Created: latency-svc-7vdct
Jan  1 11:54:55.587: INFO: Got endpoints: latency-svc-7vdct [2.420712554s]
Jan  1 11:54:55.771: INFO: Created: latency-svc-wfmhs
Jan  1 11:54:55.778: INFO: Got endpoints: latency-svc-wfmhs [2.5666742s]
Jan  1 11:54:56.030: INFO: Created: latency-svc-tdg9d
Jan  1 11:54:56.060: INFO: Got endpoints: latency-svc-tdg9d [2.615894283s]
Jan  1 11:54:56.257: INFO: Created: latency-svc-lz2ms
Jan  1 11:54:56.257: INFO: Got endpoints: latency-svc-lz2ms [2.596246072s]
Jan  1 11:54:56.564: INFO: Created: latency-svc-fhlsj
Jan  1 11:54:56.580: INFO: Got endpoints: latency-svc-fhlsj [2.650829494s]
Jan  1 11:54:57.657: INFO: Created: latency-svc-pbr8l
Jan  1 11:54:57.657: INFO: Got endpoints: latency-svc-pbr8l [3.532549522s]
Jan  1 11:54:58.301: INFO: Created: latency-svc-dsqmn
Jan  1 11:54:58.531: INFO: Got endpoints: latency-svc-dsqmn [4.16769148s]
Jan  1 11:54:58.564: INFO: Created: latency-svc-z6mrq
Jan  1 11:54:58.844: INFO: Got endpoints: latency-svc-z6mrq [4.353915606s]
Jan  1 11:54:58.876: INFO: Created: latency-svc-pj4kl
Jan  1 11:54:58.900: INFO: Got endpoints: latency-svc-pj4kl [4.17557761s]
Jan  1 11:54:59.055: INFO: Created: latency-svc-66q8r
Jan  1 11:54:59.070: INFO: Got endpoints: latency-svc-66q8r [4.294466737s]
Jan  1 11:54:59.291: INFO: Created: latency-svc-rxkvs
Jan  1 11:54:59.333: INFO: Got endpoints: latency-svc-rxkvs [4.379101837s]
Jan  1 11:54:59.463: INFO: Created: latency-svc-sk5hl
Jan  1 11:54:59.481: INFO: Got endpoints: latency-svc-sk5hl [4.340929193s]
Jan  1 11:54:59.537: INFO: Created: latency-svc-76zmb
Jan  1 11:54:59.681: INFO: Got endpoints: latency-svc-76zmb [4.465364149s]
Jan  1 11:54:59.706: INFO: Created: latency-svc-qlc9r
Jan  1 11:54:59.725: INFO: Got endpoints: latency-svc-qlc9r [4.489427544s]
Jan  1 11:54:59.920: INFO: Created: latency-svc-fsp8c
Jan  1 11:54:59.936: INFO: Got endpoints: latency-svc-fsp8c [4.524574461s]
Jan  1 11:55:00.127: INFO: Created: latency-svc-zwhgs
Jan  1 11:55:00.136: INFO: Got endpoints: latency-svc-zwhgs [4.54930795s]
Jan  1 11:55:00.191: INFO: Created: latency-svc-59hw9
Jan  1 11:55:00.305: INFO: Got endpoints: latency-svc-59hw9 [4.527259428s]
Jan  1 11:55:00.326: INFO: Created: latency-svc-dmdw8
Jan  1 11:55:00.340: INFO: Got endpoints: latency-svc-dmdw8 [4.278952236s]
Jan  1 11:55:00.507: INFO: Created: latency-svc-vlxmj
Jan  1 11:55:00.537: INFO: Got endpoints: latency-svc-vlxmj [4.280161231s]
Jan  1 11:55:00.718: INFO: Created: latency-svc-wqd6m
Jan  1 11:55:00.750: INFO: Got endpoints: latency-svc-wqd6m [4.169716192s]
Jan  1 11:55:00.909: INFO: Created: latency-svc-n8956
Jan  1 11:55:00.928: INFO: Got endpoints: latency-svc-n8956 [3.270315655s]
Jan  1 11:55:00.979: INFO: Created: latency-svc-wkz9c
Jan  1 11:55:00.987: INFO: Got endpoints: latency-svc-wkz9c [2.455529251s]
Jan  1 11:55:01.129: INFO: Created: latency-svc-fzctz
Jan  1 11:55:01.153: INFO: Got endpoints: latency-svc-fzctz [2.308204522s]
Jan  1 11:55:01.225: INFO: Created: latency-svc-shqnl
Jan  1 11:55:01.357: INFO: Got endpoints: latency-svc-shqnl [2.457546864s]
Jan  1 11:55:01.405: INFO: Created: latency-svc-4cnw2
Jan  1 11:55:01.599: INFO: Got endpoints: latency-svc-4cnw2 [2.529082623s]
Jan  1 11:55:01.661: INFO: Created: latency-svc-jwsns
Jan  1 11:55:01.672: INFO: Got endpoints: latency-svc-jwsns [2.338065379s]
Jan  1 11:55:01.838: INFO: Created: latency-svc-56dz7
Jan  1 11:55:01.927: INFO: Got endpoints: latency-svc-56dz7 [2.445246472s]
Jan  1 11:55:02.083: INFO: Created: latency-svc-bf562
Jan  1 11:55:02.127: INFO: Got endpoints: latency-svc-bf562 [2.446192894s]
Jan  1 11:55:02.270: INFO: Created: latency-svc-z7ds7
Jan  1 11:55:02.291: INFO: Got endpoints: latency-svc-z7ds7 [2.565582923s]
Jan  1 11:55:02.343: INFO: Created: latency-svc-jrlgr
Jan  1 11:55:02.463: INFO: Got endpoints: latency-svc-jrlgr [2.526659266s]
Jan  1 11:55:02.529: INFO: Created: latency-svc-x27k2
Jan  1 11:55:02.554: INFO: Got endpoints: latency-svc-x27k2 [2.417306421s]
Jan  1 11:55:02.795: INFO: Created: latency-svc-qqmmf
Jan  1 11:55:02.796: INFO: Got endpoints: latency-svc-qqmmf [2.489884213s]
Jan  1 11:55:02.842: INFO: Created: latency-svc-tmjsc
Jan  1 11:55:02.948: INFO: Got endpoints: latency-svc-tmjsc [2.608188635s]
Jan  1 11:55:03.043: INFO: Created: latency-svc-cfxrs
Jan  1 11:55:03.217: INFO: Got endpoints: latency-svc-cfxrs [2.679780232s]
Jan  1 11:55:03.242: INFO: Created: latency-svc-6qfbm
Jan  1 11:55:03.270: INFO: Got endpoints: latency-svc-6qfbm [2.519178911s]
Jan  1 11:55:03.299: INFO: Created: latency-svc-l44cd
Jan  1 11:55:03.400: INFO: Got endpoints: latency-svc-l44cd [2.472438287s]
Jan  1 11:55:03.469: INFO: Created: latency-svc-mbm5z
Jan  1 11:55:03.497: INFO: Got endpoints: latency-svc-mbm5z [2.508855459s]
Jan  1 11:55:03.702: INFO: Created: latency-svc-sl2dk
Jan  1 11:55:03.708: INFO: Got endpoints: latency-svc-sl2dk [2.554778548s]
Jan  1 11:55:03.708: INFO: Latencies: [123.81287ms 208.464032ms 362.279129ms 374.108739ms 593.577954ms 698.918661ms 895.526975ms 986.09759ms 1.188452242s 1.368555751s 1.447979718s 1.631341324s 1.835448332s 1.836697002s 1.872520797s 1.926120151s 1.926479899s 1.972570339s 1.996247266s 2.000132891s 2.029808607s 2.046917364s 2.069140464s 2.083241522s 2.099740961s 2.124053913s 2.125990887s 2.133348745s 2.147652285s 2.157861107s 2.165606481s 2.174454475s 2.179306765s 2.182460175s 2.194024961s 2.20131346s 2.204892451s 2.206608781s 2.206610246s 2.209438752s 2.211135888s 2.213251663s 2.223831824s 2.233087126s 2.234320419s 2.234871031s 2.240480703s 2.253894392s 2.271900751s 2.281683165s 2.287131443s 2.289821385s 2.292041572s 2.292230202s 2.293619868s 2.305588464s 2.308137802s 2.308204522s 2.309346601s 2.310034644s 2.320300221s 2.331929679s 2.338065379s 2.343022118s 2.350148294s 2.352989959s 2.371435463s 2.388324964s 2.39138514s 2.401411023s 2.40934043s 2.413824404s 2.417306421s 2.420712554s 2.426941736s 2.42835279s 2.444481258s 2.445246472s 2.446192894s 2.447398665s 2.450500791s 2.455193062s 2.455529251s 2.457546864s 2.461021382s 2.468422414s 2.472438287s 2.475540072s 2.489884213s 2.508855459s 2.509808418s 2.51357067s 2.518674234s 2.519178911s 2.526659266s 2.529066634s 2.529082623s 2.530736005s 2.544872673s 2.546469736s 2.547220996s 2.548789705s 2.554778548s 2.565582923s 2.5666742s 2.572636592s 2.592051132s 2.596246072s 2.602542179s 2.602630847s 2.604769849s 2.608188635s 2.612724949s 2.615894283s 2.616414091s 2.627501288s 2.638042747s 2.640666167s 2.641089654s 2.642987423s 2.646880145s 2.647799456s 2.650829494s 2.658788897s 2.673600805s 2.674297843s 2.679780232s 2.680999996s 2.686315384s 2.688220908s 2.689937351s 2.69401804s 2.701305495s 2.702076441s 2.712019574s 2.724308113s 2.73343407s 2.756427507s 2.75915714s 2.764933146s 2.782758878s 2.784056399s 2.788829306s 2.800637127s 2.801076992s 2.802793712s 2.80439638s 2.805058616s 2.81141565s 2.813077183s 2.813840325s 
2.815286378s 2.817513313s 2.821721608s 2.836137135s 2.844199268s 2.845955834s 2.855388918s 2.858264137s 2.860544628s 2.876201344s 2.876263649s 2.876358891s 2.885182178s 2.886389916s 2.933704559s 2.937065626s 2.942056437s 2.994388289s 3.013845797s 3.016636292s 3.039735123s 3.073010343s 3.103261746s 3.119986733s 3.169106559s 3.178225818s 3.204164292s 3.217012845s 3.219012946s 3.270315655s 3.277865479s 3.298140677s 3.319012278s 3.417542555s 3.532549522s 4.16769148s 4.169716192s 4.17557761s 4.278952236s 4.280161231s 4.294466737s 4.340929193s 4.353915606s 4.379101837s 4.465364149s 4.489427544s 4.524574461s 4.527259428s 4.54930795s]
Jan  1 11:55:03.709: INFO: 50 %ile: 2.547220996s
Jan  1 11:55:03.709: INFO: 90 %ile: 3.270315655s
Jan  1 11:55:03.709: INFO: 99 %ile: 4.527259428s
Jan  1 11:55:03.709: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:55:03.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-mkxl6" for this suite.
Jan  1 11:55:59.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:55:59.848: INFO: namespace: e2e-tests-svc-latency-mkxl6, resource: bindings, ignored listing per whitelist
Jan  1 11:55:59.922: INFO: namespace e2e-tests-svc-latency-mkxl6 deletion completed in 56.20003373s

• [SLOW TEST:101.346 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:55:59.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-s88z
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 11:56:00.167: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-s88z" in namespace "e2e-tests-subpath-9m5jk" to be "success or failure"
Jan  1 11:56:00.191: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 23.199614ms
Jan  1 11:56:02.514: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346763652s
Jan  1 11:56:04.542: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374702046s
Jan  1 11:56:07.020: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.852256979s
Jan  1 11:56:09.062: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894659395s
Jan  1 11:56:11.075: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.90751065s
Jan  1 11:56:13.101: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.9334347s
Jan  1 11:56:15.127: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.959316595s
Jan  1 11:56:17.150: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=true. Elapsed: 16.98237317s
Jan  1 11:56:19.166: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 18.998704828s
Jan  1 11:56:21.190: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 21.02255592s
Jan  1 11:56:23.208: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 23.040286535s
Jan  1 11:56:25.220: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 25.052354966s
Jan  1 11:56:27.244: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 27.076502212s
Jan  1 11:56:29.266: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 29.097961644s
Jan  1 11:56:31.294: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 31.126114636s
Jan  1 11:56:33.320: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Running", Reason="", readiness=false. Elapsed: 33.152089204s
Jan  1 11:56:35.348: INFO: Pod "pod-subpath-test-projected-s88z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.180198474s
STEP: Saw pod success
Jan  1 11:56:35.348: INFO: Pod "pod-subpath-test-projected-s88z" satisfied condition "success or failure"
Jan  1 11:56:35.357: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-s88z container test-container-subpath-projected-s88z: 
STEP: delete the pod
Jan  1 11:56:35.456: INFO: Waiting for pod pod-subpath-test-projected-s88z to disappear
Jan  1 11:56:35.552: INFO: Pod pod-subpath-test-projected-s88z no longer exists
STEP: Deleting pod pod-subpath-test-projected-s88z
Jan  1 11:56:35.552: INFO: Deleting pod "pod-subpath-test-projected-s88z" in namespace "e2e-tests-subpath-9m5jk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:56:35.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9m5jk" for this suite.
Jan  1 11:56:41.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:56:41.721: INFO: namespace: e2e-tests-subpath-9m5jk, resource: bindings, ignored listing per whitelist
Jan  1 11:56:41.899: INFO: namespace e2e-tests-subpath-9m5jk deletion completed in 6.333883311s

• [SLOW TEST:41.976 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:56:41.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  1 11:56:42.170: INFO: Waiting up to 5m0s for pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-pq2bt" to be "success or failure"
Jan  1 11:56:42.334: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 163.790945ms
Jan  1 11:56:44.362: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191543241s
Jan  1 11:56:46.384: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214466711s
Jan  1 11:56:48.495: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324730943s
Jan  1 11:56:50.944: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.774354718s
Jan  1 11:56:52.957: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.786936794s
STEP: Saw pod success
Jan  1 11:56:52.957: INFO: Pod "pod-c68caefe-2c8d-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:56:52.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c68caefe-2c8d-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 11:56:53.439: INFO: Waiting for pod pod-c68caefe-2c8d-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:56:53.767: INFO: Pod pod-c68caefe-2c8d-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:56:53.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pq2bt" for this suite.
Jan  1 11:56:59.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:57:00.009: INFO: namespace: e2e-tests-emptydir-pq2bt, resource: bindings, ignored listing per whitelist
Jan  1 11:57:00.058: INFO: namespace e2e-tests-emptydir-pq2bt deletion completed in 6.262544549s

• [SLOW TEST:18.159 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:57:00.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  1 11:57:00.389: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792301,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 11:57:00.389: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792301,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  1 11:57:10.441: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792314,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  1 11:57:10.442: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792314,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  1 11:57:20.488: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792327,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 11:57:20.489: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792327,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  1 11:57:30.556: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792340,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 11:57:30.556: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-a,UID:d1651fdb-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792340,Generation:0,CreationTimestamp:2020-01-01 11:57:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  1 11:57:40.600: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-b,UID:e960ff4a-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792353,Generation:0,CreationTimestamp:2020-01-01 11:57:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 11:57:40.601: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-b,UID:e960ff4a-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792353,Generation:0,CreationTimestamp:2020-01-01 11:57:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  1 11:57:50.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-b,UID:e960ff4a-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792366,Generation:0,CreationTimestamp:2020-01-01 11:57:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 11:57:50.631: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gs942,SelfLink:/api/v1/namespaces/e2e-tests-watch-gs942/configmaps/e2e-watch-test-configmap-b,UID:e960ff4a-2c8d-11ea-a994-fa163e34d433,ResourceVersion:16792366,Generation:0,CreationTimestamp:2020-01-01 11:57:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:58:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-gs942" for this suite.
Jan  1 11:58:06.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:58:06.802: INFO: namespace: e2e-tests-watch-gs942, resource: bindings, ignored listing per whitelist
Jan  1 11:58:06.904: INFO: namespace e2e-tests-watch-gs942 deletion completed in 6.246554692s

• [SLOW TEST:66.846 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:58:06.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  1 11:58:07.154: INFO: Waiting up to 5m0s for pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-6chff" to be "success or failure"
Jan  1 11:58:07.343: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 189.052001ms
Jan  1 11:58:09.356: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201302114s
Jan  1 11:58:11.382: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227777452s
Jan  1 11:58:13.663: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.508497071s
Jan  1 11:58:15.685: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530967024s
Jan  1 11:58:17.731: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.576652465s
STEP: Saw pod success
Jan  1 11:58:17.731: INFO: Pod "pod-f93392ea-2c8d-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:58:17.736: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f93392ea-2c8d-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 11:58:18.661: INFO: Waiting for pod pod-f93392ea-2c8d-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:58:18.716: INFO: Pod pod-f93392ea-2c8d-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:58:18.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6chff" for this suite.
Jan  1 11:58:24.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:58:25.045: INFO: namespace: e2e-tests-emptydir-6chff, resource: bindings, ignored listing per whitelist
Jan  1 11:58:25.169: INFO: namespace e2e-tests-emptydir-6chff deletion completed in 6.433992167s

• [SLOW TEST:18.265 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:58:25.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-04099067-2c8e-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 11:58:25.319: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-pv952" to be "success or failure"
Jan  1 11:58:25.376: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.443299ms
Jan  1 11:58:27.406: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086605157s
Jan  1 11:58:29.422: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103102195s
Jan  1 11:58:31.620: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300774622s
Jan  1 11:58:33.959: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.640285518s
Jan  1 11:58:35.985: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665796892s
STEP: Saw pod success
Jan  1 11:58:35.985: INFO: Pod "pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:58:35.991: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 11:58:36.515: INFO: Waiting for pod pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:58:36.640: INFO: Pod pod-projected-secrets-040a8bac-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:58:36.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pv952" for this suite.
Jan  1 11:58:42.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:58:42.947: INFO: namespace: e2e-tests-projected-pv952, resource: bindings, ignored listing per whitelist
Jan  1 11:58:42.996: INFO: namespace e2e-tests-projected-pv952 deletion completed in 6.342185982s

• [SLOW TEST:17.825 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:58:42.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  1 11:58:43.140: INFO: Waiting up to 5m0s for pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-qjtms" to be "success or failure"
Jan  1 11:58:43.146: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.624828ms
Jan  1 11:58:45.158: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017965982s
Jan  1 11:58:47.177: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036312864s
Jan  1 11:58:49.610: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4696871s
Jan  1 11:58:51.626: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.485417434s
Jan  1 11:58:53.645: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504477154s
STEP: Saw pod success
Jan  1 11:58:53.645: INFO: Pod "downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:58:53.650: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 11:58:54.427: INFO: Waiting for pod downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:58:54.489: INFO: Pod downward-api-0ea31e44-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:58:54.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qjtms" for this suite.
Jan  1 11:59:00.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:59:00.956: INFO: namespace: e2e-tests-downward-api-qjtms, resource: bindings, ignored listing per whitelist
Jan  1 11:59:00.962: INFO: namespace e2e-tests-downward-api-qjtms deletion completed in 6.267251804s

• [SLOW TEST:17.966 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:59:00.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 11:59:01.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-f7mdq" to be "success or failure"
Jan  1 11:59:01.211: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.108901ms
Jan  1 11:59:03.234: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06445397s
Jan  1 11:59:05.248: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079107674s
Jan  1 11:59:07.272: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102523308s
Jan  1 11:59:09.609: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.439503142s
Jan  1 11:59:11.624: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.454949411s
STEP: Saw pod success
Jan  1 11:59:11.624: INFO: Pod "downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:59:11.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 11:59:12.958: INFO: Waiting for pod downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:59:13.220: INFO: Pod downwardapi-volume-19692256-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:59:13.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f7mdq" for this suite.
Jan  1 11:59:19.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:59:19.611: INFO: namespace: e2e-tests-projected-f7mdq, resource: bindings, ignored listing per whitelist
Jan  1 11:59:19.778: INFO: namespace e2e-tests-projected-f7mdq deletion completed in 6.343098041s

• [SLOW TEST:18.816 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:59:19.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:59:20.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7ntq5" for this suite.
Jan  1 11:59:26.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:59:26.229: INFO: namespace: e2e-tests-kubelet-test-7ntq5, resource: bindings, ignored listing per whitelist
Jan  1 11:59:26.252: INFO: namespace e2e-tests-kubelet-test-7ntq5 deletion completed in 6.20188726s

• [SLOW TEST:6.474 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:59:26.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  1 11:59:26.563: INFO: Waiting up to 5m0s for pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-vpbgc" to be "success or failure"
Jan  1 11:59:26.636: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.91042ms
Jan  1 11:59:28.653: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089692234s
Jan  1 11:59:30.697: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132941801s
Jan  1 11:59:32.715: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15124248s
Jan  1 11:59:34.997: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.43321153s
Jan  1 11:59:37.012: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.448262617s
STEP: Saw pod success
Jan  1 11:59:37.012: INFO: Pod "downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 11:59:37.017: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 11:59:37.217: INFO: Waiting for pod downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 11:59:37.457: INFO: Pod downward-api-2885577c-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:59:37.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vpbgc" for this suite.
Jan  1 11:59:43.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 11:59:43.662: INFO: namespace: e2e-tests-downward-api-vpbgc, resource: bindings, ignored listing per whitelist
Jan  1 11:59:44.001: INFO: namespace e2e-tests-downward-api-vpbgc deletion completed in 6.522956271s

• [SLOW TEST:17.748 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 11:59:44.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  1 11:59:44.837: INFO: created pod pod-service-account-defaultsa
Jan  1 11:59:44.837: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  1 11:59:44.878: INFO: created pod pod-service-account-mountsa
Jan  1 11:59:44.878: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  1 11:59:44.985: INFO: created pod pod-service-account-nomountsa
Jan  1 11:59:44.985: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  1 11:59:45.026: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  1 11:59:45.026: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  1 11:59:45.055: INFO: created pod pod-service-account-mountsa-mountspec
Jan  1 11:59:45.056: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  1 11:59:45.233: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  1 11:59:45.233: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  1 11:59:45.460: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  1 11:59:45.461: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  1 11:59:46.630: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  1 11:59:46.630: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  1 11:59:47.230: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  1 11:59:47.230: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 11:59:47.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-4zdjv" for this suite.
Jan  1 12:00:16.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:00:16.188: INFO: namespace: e2e-tests-svcaccounts-4zdjv, resource: bindings, ignored listing per whitelist
Jan  1 12:00:16.195: INFO: namespace e2e-tests-svcaccounts-4zdjv deletion completed in 28.484163826s

• [SLOW TEST:32.194 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:00:16.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  1 12:00:16.391: INFO: Waiting up to 5m0s for pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-q5b8n" to be "success or failure"
Jan  1 12:00:16.397: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.292726ms
Jan  1 12:00:18.688: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295902209s
Jan  1 12:00:20.710: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318567412s
Jan  1 12:00:22.735: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343310277s
Jan  1 12:00:24.752: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360714153s
Jan  1 12:00:26.785: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392858726s
STEP: Saw pod success
Jan  1 12:00:26.785: INFO: Pod "pod-4639f729-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:00:26.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4639f729-2c8e-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:00:26.914: INFO: Waiting for pod pod-4639f729-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:00:26.922: INFO: Pod pod-4639f729-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:00:26.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q5b8n" for this suite.
Jan  1 12:00:32.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:00:33.154: INFO: namespace: e2e-tests-emptydir-q5b8n, resource: bindings, ignored listing per whitelist
Jan  1 12:00:33.158: INFO: namespace e2e-tests-emptydir-q5b8n deletion completed in 6.229142867s

• [SLOW TEST:16.963 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:00:33.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 12:00:33.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-swsfb'
Jan  1 12:00:35.215: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 12:00:35.215: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan  1 12:00:35.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-swsfb'
Jan  1 12:00:35.406: INFO: stderr: ""
Jan  1 12:00:35.406: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:00:35.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-swsfb" for this suite.
Jan  1 12:00:41.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:00:41.530: INFO: namespace: e2e-tests-kubectl-swsfb, resource: bindings, ignored listing per whitelist
Jan  1 12:00:41.630: INFO: namespace e2e-tests-kubectl-swsfb deletion completed in 6.213634482s

• [SLOW TEST:8.471 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:00:41.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-556d912a-2c8e-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:00:41.871: INFO: Waiting up to 5m0s for pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-9sl44" to be "success or failure"
Jan  1 12:00:41.891: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.617034ms
Jan  1 12:00:43.922: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050697207s
Jan  1 12:00:45.939: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067140318s
Jan  1 12:00:47.956: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084093254s
Jan  1 12:00:50.133: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260998962s
Jan  1 12:00:52.186: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313991206s
STEP: Saw pod success
Jan  1 12:00:52.186: INFO: Pod "pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:00:52.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 12:00:52.611: INFO: Waiting for pod pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:00:52.681: INFO: Pod pod-secrets-556f7d32-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:00:52.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9sl44" for this suite.
Jan  1 12:00:58.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:00:59.017: INFO: namespace: e2e-tests-secrets-9sl44, resource: bindings, ignored listing per whitelist
Jan  1 12:00:59.022: INFO: namespace e2e-tests-secrets-9sl44 deletion completed in 6.280310237s

• [SLOW TEST:17.392 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:00:59.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  1 12:01:09.266: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-5fbbb89a-2c8e-11ea-8bf6-0242ac110005,GenerateName:,Namespace:e2e-tests-events-qbrfj,SelfLink:/api/v1/namespaces/e2e-tests-events-qbrfj/pods/send-events-5fbbb89a-2c8e-11ea-8bf6-0242ac110005,UID:5fbea1e7-2c8e-11ea-a994-fa163e34d433,ResourceVersion:16792918,Generation:0,CreationTimestamp:2020-01-01 12:00:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 133795308,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r24sk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r24sk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-r24sk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ff8490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ff88e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:00:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:01:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:01:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:00:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-01 12:00:59 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-01 12:01:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://843a539399e649454dbd0dbcacf2a226e2ec4c8cad6a2a2e3e11f4c8e04f376c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  1 12:01:11.282: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  1 12:01:13.304: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:01:13.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-qbrfj" for this suite.
Jan  1 12:01:53.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:01:53.564: INFO: namespace: e2e-tests-events-qbrfj, resource: bindings, ignored listing per whitelist
Jan  1 12:01:53.695: INFO: namespace e2e-tests-events-qbrfj deletion completed in 40.277431807s

• [SLOW TEST:54.673 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:01:53.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-fxglz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-fxglz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fxglz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.118.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.118.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.118.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.118.235_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-fxglz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-fxglz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-fxglz.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-fxglz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fxglz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.118.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.118.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.118.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.118.235_tcp@PTR;sleep 1; done
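The probe scripts above derive the pod A-record name from `hostname -i` (dots replaced by dashes, suffixed with `<namespace>.pod.cluster.local`) and the PTR query name by reversing the service IP's octets under `in-addr.arpa.`. A minimal Python sketch of those two name transformations; the sample IPs and namespace are taken from the log above:

```python
def pod_a_record(pod_ip, namespace, domain="cluster.local"):
    # hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.<domain>"}'
    return pod_ip.replace(".", "-") + f".{namespace}.pod.{domain}"

def ptr_name(ip):
    # Reverse the octets and append in-addr.arpa., as in the dig PTR query.
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.32.0.4", "e2e-tests-dns-fxglz"))
# 10-32-0-4.e2e-tests-dns-fxglz.pod.cluster.local
print(ptr_name("10.97.118.235"))
# 235.118.97.10.in-addr.arpa.
```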

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 12:02:10.463: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.484: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.511: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fxglz from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.526: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-fxglz from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.539: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.612: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.653: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.666: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.679: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.700: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.719: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.738: INFO: Unable to read 10.97.118.235_udp@PTR from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.766: INFO: Unable to read 10.97.118.235_tcp@PTR from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.805: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.828: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.838: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fxglz from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.851: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fxglz from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.868: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.881: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.892: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.931: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.939: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.949: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.955: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.961: INFO: Unable to read 10.97.118.235_udp@PTR from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.974: INFO: Unable to read 10.97.118.235_tcp@PTR from pod e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005: the server could not find the requested resource (get pods dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005)
Jan  1 12:02:10.974: INFO: Lookups using e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-fxglz wheezy_tcp@dns-test-service.e2e-tests-dns-fxglz wheezy_udp@dns-test-service.e2e-tests-dns-fxglz.svc wheezy_tcp@dns-test-service.e2e-tests-dns-fxglz.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.97.118.235_udp@PTR 10.97.118.235_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-fxglz jessie_tcp@dns-test-service.e2e-tests-dns-fxglz jessie_udp@dns-test-service.e2e-tests-dns-fxglz.svc jessie_tcp@dns-test-service.e2e-tests-dns-fxglz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-fxglz.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-fxglz.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.97.118.235_udp@PTR 10.97.118.235_tcp@PTR]

Jan  1 12:02:16.158: INFO: DNS probes using e2e-tests-dns-fxglz/dns-test-80948b64-2c8e-11ea-8bf6-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:02:16.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-fxglz" for this suite.
Jan  1 12:02:24.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:02:24.678: INFO: namespace: e2e-tests-dns-fxglz, resource: bindings, ignored listing per whitelist
Jan  1 12:02:24.817: INFO: namespace e2e-tests-dns-fxglz deletion completed in 6.620043836s

• [SLOW TEST:31.121 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:02:24.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-92f700fb-2c8e-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:02:25.228: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-cn98s" to be "success or failure"
Jan  1 12:02:25.245: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.903074ms
Jan  1 12:02:27.432: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203009372s
Jan  1 12:02:29.449: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220354241s
Jan  1 12:02:31.705: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476164216s
Jan  1 12:02:33.744: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515500195s
Jan  1 12:02:35.758: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.529460111s
STEP: Saw pod success
Jan  1 12:02:35.758: INFO: Pod "pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:02:35.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 12:02:36.319: INFO: Waiting for pod pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:02:36.676: INFO: Pod pod-projected-configmaps-92fc0ead-2c8e-11ea-8bf6-0242ac110005 no longer exists
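The `Waiting up to 5m0s ... Phase="Pending" ... Elapsed` lines above come from the framework polling the pod until it reaches a terminal phase or the timeout expires. A hedged, self-contained sketch of that pattern; the `get_phase` callback and the counted (rather than slept) interval are illustrative, not the framework's actual code:

```python
def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() until it returns Succeeded/Failed or the timeout elapses."""
    elapsed = 0.0
    phase = None
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        elapsed += interval_s  # the real framework sleeps; here we just count
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Simulated phase sequence mirroring the log: several Pending polls, then Succeeded.
phases = iter(["Pending"] * 5 + ["Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # ('Succeeded', 10.0)
```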
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:02:36.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cn98s" for this suite.
Jan  1 12:02:42.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:02:43.073: INFO: namespace: e2e-tests-projected-cn98s, resource: bindings, ignored listing per whitelist
Jan  1 12:02:43.102: INFO: namespace e2e-tests-projected-cn98s deletion completed in 6.411772021s

• [SLOW TEST:18.284 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:02:43.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:03:44.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-lzp7t" for this suite.
Jan  1 12:03:52.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:03:52.545: INFO: namespace: e2e-tests-container-runtime-lzp7t, resource: bindings, ignored listing per whitelist
Jan  1 12:03:52.661: INFO: namespace e2e-tests-container-runtime-lzp7t deletion completed in 8.248665966s

• [SLOW TEST:69.558 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:03:52.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:03:52.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-v77nf" to be "success or failure"
Jan  1 12:03:52.896: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.348014ms
Jan  1 12:03:54.911: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037099485s
Jan  1 12:03:56.922: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047626262s
Jan  1 12:03:59.149: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275374505s
Jan  1 12:04:01.159: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285348706s
Jan  1 12:04:03.184: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.310083249s
STEP: Saw pod success
Jan  1 12:04:03.185: INFO: Pod "downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:04:03.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:04:03.384: INFO: Waiting for pod downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:04:03.391: INFO: Pod downwardapi-volume-c748813d-2c8e-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:04:03.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-v77nf" for this suite.
Jan  1 12:04:09.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:04:09.498: INFO: namespace: e2e-tests-downward-api-v77nf, resource: bindings, ignored listing per whitelist
Jan  1 12:04:09.582: INFO: namespace e2e-tests-downward-api-v77nf deletion completed in 6.184955366s

• [SLOW TEST:16.920 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:04:09.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  1 12:04:09.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:10.357: INFO: stderr: ""
Jan  1 12:04:10.357: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 12:04:10.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:10.562: INFO: stderr: ""
Jan  1 12:04:10.563: INFO: stdout: "update-demo-nautilus-hzdc6 update-demo-nautilus-rs9wd "
Jan  1 12:04:10.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzdc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:10.724: INFO: stderr: ""
Jan  1 12:04:10.725: INFO: stdout: ""
Jan  1 12:04:10.725: INFO: update-demo-nautilus-hzdc6 is created but not running
Jan  1 12:04:15.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:15.976: INFO: stderr: ""
Jan  1 12:04:15.976: INFO: stdout: "update-demo-nautilus-hzdc6 update-demo-nautilus-rs9wd "
Jan  1 12:04:15.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzdc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:16.153: INFO: stderr: ""
Jan  1 12:04:16.153: INFO: stdout: ""
Jan  1 12:04:16.153: INFO: update-demo-nautilus-hzdc6 is created but not running
Jan  1 12:04:21.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:21.493: INFO: stderr: ""
Jan  1 12:04:21.493: INFO: stdout: "update-demo-nautilus-hzdc6 update-demo-nautilus-rs9wd "
Jan  1 12:04:21.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzdc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:21.666: INFO: stderr: ""
Jan  1 12:04:21.667: INFO: stdout: ""
Jan  1 12:04:21.667: INFO: update-demo-nautilus-hzdc6 is created but not running
Jan  1 12:04:26.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:26.804: INFO: stderr: ""
Jan  1 12:04:26.805: INFO: stdout: "update-demo-nautilus-hzdc6 update-demo-nautilus-rs9wd "
Jan  1 12:04:26.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzdc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:26.955: INFO: stderr: ""
Jan  1 12:04:26.955: INFO: stdout: "true"
Jan  1 12:04:26.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzdc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:27.132: INFO: stderr: ""
Jan  1 12:04:27.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:04:27.132: INFO: validating pod update-demo-nautilus-hzdc6
Jan  1 12:04:27.160: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:04:27.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:04:27.161: INFO: update-demo-nautilus-hzdc6 is verified up and running
Jan  1 12:04:27.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rs9wd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:27.303: INFO: stderr: ""
Jan  1 12:04:27.304: INFO: stdout: "true"
Jan  1 12:04:27.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rs9wd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:04:27.564: INFO: stderr: ""
Jan  1 12:04:27.564: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:04:27.565: INFO: validating pod update-demo-nautilus-rs9wd
Jan  1 12:04:27.614: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:04:27.614: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:04:27.614: INFO: update-demo-nautilus-rs9wd is verified up and running
STEP: rolling-update to new replication controller
Jan  1 12:04:27.618: INFO: scanned /root for discovery docs: 
Jan  1 12:04:27.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:05:04.156: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  1 12:05:04.156: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 12:05:04.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:05:04.371: INFO: stderr: ""
Jan  1 12:05:04.372: INFO: stdout: "update-demo-kitten-6sw7f update-demo-kitten-jw4c6 "
Jan  1 12:05:04.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6sw7f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:05:04.498: INFO: stderr: ""
Jan  1 12:05:04.498: INFO: stdout: "true"
Jan  1 12:05:04.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6sw7f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:05:04.623: INFO: stderr: ""
Jan  1 12:05:04.623: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  1 12:05:04.623: INFO: validating pod update-demo-kitten-6sw7f
Jan  1 12:05:04.681: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  1 12:05:04.681: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  1 12:05:04.681: INFO: update-demo-kitten-6sw7f is verified up and running
Jan  1 12:05:04.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jw4c6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:05:04.840: INFO: stderr: ""
Jan  1 12:05:04.840: INFO: stdout: "true"
Jan  1 12:05:04.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jw4c6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gbmr8'
Jan  1 12:05:04.959: INFO: stderr: ""
Jan  1 12:05:04.960: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  1 12:05:04.960: INFO: validating pod update-demo-kitten-jw4c6
Jan  1 12:05:04.972: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  1 12:05:04.972: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  1 12:05:04.972: INFO: update-demo-kitten-jw4c6 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:05:04.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gbmr8" for this suite.
Jan  1 12:05:29.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:05:29.188: INFO: namespace: e2e-tests-kubectl-gbmr8, resource: bindings, ignored listing per whitelist
Jan  1 12:05:29.414: INFO: namespace e2e-tests-kubectl-gbmr8 deletion completed in 24.435824619s

• [SLOW TEST:79.832 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
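The readiness polling above works by repeatedly evaluating a go-template against the pod object until it prints `true`: the container counts as up only when `status.containerStatuses` contains an entry named `update-demo` with a `running` state. As an illustration only, here is a minimal Python sketch of the same predicate, evaluated against a hypothetical pod object shaped like `kubectl get pod -o json` output rather than a live cluster:

```python
import json

def container_running(pod: dict, container_name: str) -> bool:
    """Mirror of the e2e go-template check: true only when the named
    container reports a 'running' entry in status.containerStatuses."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# Hypothetical pod object (not taken from the log) with the shape the
# template inspects: metadata, then status.containerStatuses[].state.
pod = json.loads("""
{
  "metadata": {"name": "update-demo-nautilus-hzdc6"},
  "status": {
    "containerStatuses": [
      {"name": "update-demo",
       "state": {"running": {"startedAt": "2020-01-01T12:04:22Z"}}}
    ]
  }
}
""")

print(container_running(pod, "update-demo"))  # True once the container runs
```

This also explains the empty `stdout: ""` lines earlier in the log: while the pod is Pending there is no matching `running` state, the template emits nothing, and the test sleeps and retries.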
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:05:29.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:05:41.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-b26kq" for this suite.
Jan  1 12:05:47.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:05:47.967: INFO: namespace: e2e-tests-kubelet-test-b26kq, resource: bindings, ignored listing per whitelist
Jan  1 12:05:48.052: INFO: namespace e2e-tests-kubelet-test-b26kq deletion completed in 6.380028848s

• [SLOW TEST:18.636 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:05:48.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  1 12:05:48.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  1 12:05:48.336: INFO: stderr: ""
Jan  1 12:05:48.336: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:05:48.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2hg77" for this suite.
Jan  1 12:05:56.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:05:56.488: INFO: namespace: e2e-tests-kubectl-2hg77, resource: bindings, ignored listing per whitelist
Jan  1 12:05:56.614: INFO: namespace e2e-tests-kubectl-2hg77 deletion completed in 8.270326372s

• [SLOW TEST:8.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
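The `cluster-info` stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m…\x1b[0m`), which is why the raw string looks noisy. A small sketch, using a sample string copied from the log, of stripping the escapes before asserting on the plain text:

```python
import re

# Matches SGR color sequences such as \x1b[0;32m and the reset \x1b[0m.
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color codes so plain-text assertions work."""
    return ANSI_ESCAPE.sub("", s)

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n")
plain = strip_ansi(stdout)
print(plain)  # Kubernetes master is running at https://172.24.4.212:6443
```

The test itself only needs the uncolored substring "Kubernetes master" to be present in the output to pass.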
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:05:56.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  1 12:05:56.916: INFO: Waiting up to 5m0s for pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-var-expansion-bfn9v" to be "success or failure"
Jan  1 12:05:56.939: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.392534ms
Jan  1 12:05:59.230: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313528149s
Jan  1 12:06:01.298: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381450726s
Jan  1 12:06:03.447: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531036964s
Jan  1 12:06:05.464: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547761894s
Jan  1 12:06:07.485: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.569125307s
Jan  1 12:06:09.501: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.584342684s
STEP: Saw pod success
Jan  1 12:06:09.501: INFO: Pod "var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:06:09.507: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 12:06:09.761: INFO: Waiting for pod var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:06:10.054: INFO: Pod var-expansion-11372888-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:06:10.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bfn9v" for this suite.
Jan  1 12:06:16.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:06:16.190: INFO: namespace: e2e-tests-var-expansion-bfn9v, resource: bindings, ignored listing per whitelist
Jan  1 12:06:16.283: INFO: namespace e2e-tests-var-expansion-bfn9v deletion completed in 6.217171835s

• [SLOW TEST:19.668 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
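Each "Waiting up to 5m0s for pod … to be \"success or failure\"" sequence above is the same pattern: poll the pod's phase on an interval, log the elapsed time, and stop once the phase reaches Succeeded or Failed (or the timeout expires). This is an illustrative loop, not the framework's actual Go implementation; a sketch in Python with a simulated phase source:

```python
import time

def wait_for_phase(get_phase, desired=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll get_phase() every `interval` seconds until it returns one of
    the desired phases or `timeout` seconds elapse; mirrors the log's
    'Waiting up to 5m0s ... Phase="Pending" ... Elapsed' sequence."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in desired:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod never reached {desired} within {timeout}s")

# Simulated pod that stays Pending for a few polls, then succeeds,
# like the Phase="Pending" ... Phase="Succeeded" lines in the log.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0.0))  # Succeeded
```

Note the roughly two-second spacing between the `Elapsed:` log lines, which corresponds to the poll interval in this sketch.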
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:06:16.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  1 12:06:16.573: INFO: Waiting up to 5m0s for pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-42qvd" to be "success or failure"
Jan  1 12:06:16.675: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 101.931927ms
Jan  1 12:06:18.689: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115651563s
Jan  1 12:06:20.710: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137164498s
Jan  1 12:06:22.723: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149877648s
Jan  1 12:06:24.754: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181359628s
Jan  1 12:06:26.802: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.229272455s
STEP: Saw pod success
Jan  1 12:06:26.803: INFO: Pod "pod-1cedb678-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:06:26.827: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1cedb678-2c8f-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:06:26.942: INFO: Waiting for pod pod-1cedb678-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:06:26.960: INFO: Pod pod-1cedb678-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:06:26.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-42qvd" for this suite.
Jan  1 12:06:33.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:06:33.077: INFO: namespace: e2e-tests-emptydir-42qvd, resource: bindings, ignored listing per whitelist
Jan  1 12:06:33.110: INFO: namespace e2e-tests-emptydir-42qvd deletion completed in 6.140389139s

• [SLOW TEST:16.827 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:06:33.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:06:33.391: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-t2dqz" to be "success or failure"
Jan  1 12:06:33.408: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.89162ms
Jan  1 12:06:35.485: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09313186s
Jan  1 12:06:37.520: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128431018s
Jan  1 12:06:39.886: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494777816s
Jan  1 12:06:41.916: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.524862551s
Jan  1 12:06:43.952: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.560407136s
Jan  1 12:06:45.973: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.581029834s
STEP: Saw pod success
Jan  1 12:06:45.973: INFO: Pod "downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:06:46.080: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:06:46.265: INFO: Waiting for pod downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:06:46.280: INFO: Pod downwardapi-volume-26f0bc5f-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:06:46.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t2dqz" for this suite.
Jan  1 12:06:52.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:06:52.608: INFO: namespace: e2e-tests-downward-api-t2dqz, resource: bindings, ignored listing per whitelist
Jan  1 12:06:52.644: INFO: namespace e2e-tests-downward-api-t2dqz deletion completed in 6.346956087s

• [SLOW TEST:19.533 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:06:52.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  1 12:06:52.953: INFO: Waiting up to 5m0s for pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-2qbfx" to be "success or failure"
Jan  1 12:06:52.970: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.465318ms
Jan  1 12:06:54.990: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036595399s
Jan  1 12:06:57.006: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053053987s
Jan  1 12:06:59.570: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616926272s
Jan  1 12:07:01.646: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.693031637s
Jan  1 12:07:03.663: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.709793747s
STEP: Saw pod success
Jan  1 12:07:03.663: INFO: Pod "downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:07:03.671: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 12:07:05.061: INFO: Waiting for pod downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:07:05.151: INFO: Pod downward-api-3293d6e4-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:07:05.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2qbfx" for this suite.
Jan  1 12:07:11.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:07:11.517: INFO: namespace: e2e-tests-downward-api-2qbfx, resource: bindings, ignored listing per whitelist
Jan  1 12:07:11.526: INFO: namespace e2e-tests-downward-api-2qbfx deletion completed in 6.350561946s

• [SLOW TEST:18.882 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:07:11.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:07:11.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-h9mhf" to be "success or failure"
Jan  1 12:07:11.886: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.177452ms
Jan  1 12:07:13.919: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080052569s
Jan  1 12:07:15.941: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102000088s
Jan  1 12:07:17.956: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117088903s
Jan  1 12:07:19.975: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136330062s
Jan  1 12:07:21.991: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152191386s
Jan  1 12:07:24.008: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.168836873s
STEP: Saw pod success
Jan  1 12:07:24.008: INFO: Pod "downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:07:24.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:07:24.516: INFO: Waiting for pod downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:07:24.558: INFO: Pod downwardapi-volume-3dc6b7c9-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:07:24.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h9mhf" for this suite.
Jan  1 12:07:30.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:07:30.911: INFO: namespace: e2e-tests-downward-api-h9mhf, resource: bindings, ignored listing per whitelist
Jan  1 12:07:30.919: INFO: namespace e2e-tests-downward-api-h9mhf deletion completed in 6.257803876s

• [SLOW TEST:19.392 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:07:30.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:07:31.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-th268" to be "success or failure"
Jan  1 12:07:31.295: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.592677ms
Jan  1 12:07:33.695: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421548266s
Jan  1 12:07:35.709: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435660117s
Jan  1 12:07:37.736: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462360156s
Jan  1 12:07:40.118: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.844271539s
Jan  1 12:07:42.313: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.03942529s
STEP: Saw pod success
Jan  1 12:07:42.313: INFO: Pod "downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:07:42.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:07:43.029: INFO: Waiting for pod downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:07:43.038: INFO: Pod downwardapi-volume-49677c0e-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:07:43.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-th268" for this suite.
Jan  1 12:07:49.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:07:49.211: INFO: namespace: e2e-tests-downward-api-th268, resource: bindings, ignored listing per whitelist
Jan  1 12:07:49.227: INFO: namespace e2e-tests-downward-api-th268 deletion completed in 6.180922143s

• [SLOW TEST:18.308 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:07:49.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-544ade0e-2c8f-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:07:49.460: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-q5nm5" to be "success or failure"
Jan  1 12:07:49.485: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.034512ms
Jan  1 12:07:51.518: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058219637s
Jan  1 12:07:53.543: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083394051s
Jan  1 12:07:55.727: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267370514s
Jan  1 12:07:58.467: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006987296s
Jan  1 12:08:00.837: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.376944297s
STEP: Saw pod success
Jan  1 12:08:00.837: INFO: Pod "pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:08:00.846: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 12:08:01.000: INFO: Waiting for pod pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:08:01.008: INFO: Pod pod-projected-secrets-544c2f70-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:08:01.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q5nm5" for this suite.
Jan  1 12:08:07.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:08:07.322: INFO: namespace: e2e-tests-projected-q5nm5, resource: bindings, ignored listing per whitelist
Jan  1 12:08:07.331: INFO: namespace e2e-tests-projected-q5nm5 deletion completed in 6.308876741s

• [SLOW TEST:18.102 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:08:07.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5f1c9d37-2c8f-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:08:07.604: INFO: Waiting up to 5m0s for pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-sbw6q" to be "success or failure"
Jan  1 12:08:07.639: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.129684ms
Jan  1 12:08:09.860: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255443441s
Jan  1 12:08:11.890: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285771385s
Jan  1 12:08:13.948: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343916928s
Jan  1 12:08:15.964: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.359520006s
Jan  1 12:08:17.994: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.389896621s
STEP: Saw pod success
Jan  1 12:08:17.995: INFO: Pod "pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:08:18.020: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 12:08:18.160: INFO: Waiting for pod pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:08:18.167: INFO: Pod pod-secrets-5f1d96f8-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:08:18.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sbw6q" for this suite.
Jan  1 12:08:26.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:08:26.433: INFO: namespace: e2e-tests-secrets-sbw6q, resource: bindings, ignored listing per whitelist
Jan  1 12:08:26.498: INFO: namespace e2e-tests-secrets-sbw6q deletion completed in 8.323825023s

• [SLOW TEST:19.166 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:08:26.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:08:26.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-9q2pf" to be "success or failure"
Jan  1 12:08:27.036: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 65.705409ms
Jan  1 12:08:29.482: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511585335s
Jan  1 12:08:31.499: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528904417s
Jan  1 12:08:33.520: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549492693s
Jan  1 12:08:35.532: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561582769s
Jan  1 12:08:37.847: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.876829494s
STEP: Saw pod success
Jan  1 12:08:37.847: INFO: Pod "downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:08:37.865: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:08:38.152: INFO: Waiting for pod downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:08:38.179: INFO: Pod downwardapi-volume-6aa15922-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:08:38.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9q2pf" for this suite.
Jan  1 12:08:44.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:08:44.358: INFO: namespace: e2e-tests-projected-9q2pf, resource: bindings, ignored listing per whitelist
Jan  1 12:08:44.403: INFO: namespace e2e-tests-projected-9q2pf deletion completed in 6.211417892s

• [SLOW TEST:17.902 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:08:44.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  1 12:08:44.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bqjbv'
Jan  1 12:08:45.048: INFO: stderr: ""
Jan  1 12:08:45.048: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  1 12:08:46.067: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:46.067: INFO: Found 0 / 1
Jan  1 12:08:47.062: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:47.062: INFO: Found 0 / 1
Jan  1 12:08:48.065: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:48.066: INFO: Found 0 / 1
Jan  1 12:08:49.070: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:49.070: INFO: Found 0 / 1
Jan  1 12:08:50.579: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:50.580: INFO: Found 0 / 1
Jan  1 12:08:51.268: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:51.269: INFO: Found 0 / 1
Jan  1 12:08:52.140: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:52.140: INFO: Found 0 / 1
Jan  1 12:08:53.065: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:53.065: INFO: Found 0 / 1
Jan  1 12:08:54.074: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:54.074: INFO: Found 0 / 1
Jan  1 12:08:55.060: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:55.060: INFO: Found 1 / 1
Jan  1 12:08:55.060: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  1 12:08:55.066: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:55.066: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  1 12:08:55.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tfsv5 --namespace=e2e-tests-kubectl-bqjbv -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  1 12:08:55.383: INFO: stderr: ""
Jan  1 12:08:55.384: INFO: stdout: "pod/redis-master-tfsv5 patched\n"
STEP: checking annotations
Jan  1 12:08:55.545: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:08:55.545: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:08:55.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bqjbv" for this suite.
Jan  1 12:09:19.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:09:19.636: INFO: namespace: e2e-tests-kubectl-bqjbv, resource: bindings, ignored listing per whitelist
Jan  1 12:09:19.831: INFO: namespace e2e-tests-kubectl-bqjbv deletion completed in 24.268251079s

• [SLOW TEST:35.428 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:09:19.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-86khb
Jan  1 12:09:30.116: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-86khb
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 12:09:30.123: INFO: Initial restart count of pod liveness-http is 0
Jan  1 12:09:50.396: INFO: Restart count of pod e2e-tests-container-probe-86khb/liveness-http is now 1 (20.273199715s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:09:50.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-86khb" for this suite.
Jan  1 12:09:56.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:09:56.672: INFO: namespace: e2e-tests-container-probe-86khb, resource: bindings, ignored listing per whitelist
Jan  1 12:09:56.683: INFO: namespace e2e-tests-container-probe-86khb deletion completed in 6.166863825s

• [SLOW TEST:36.852 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:09:56.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a06df8ff-2c8f-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:09:57.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-wdw92" to be "success or failure"
Jan  1 12:09:57.242: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.854861ms
Jan  1 12:09:59.607: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382922278s
Jan  1 12:10:01.653: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429059684s
Jan  1 12:10:03.722: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498133432s
Jan  1 12:10:05.971: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.746572423s
Jan  1 12:10:08.004: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.779364727s
STEP: Saw pod success
Jan  1 12:10:08.004: INFO: Pod "pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:10:08.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 12:10:08.150: INFO: Waiting for pod pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:10:08.204: INFO: Pod pod-configmaps-a070d2db-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:10:08.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wdw92" for this suite.
Jan  1 12:10:14.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:10:14.327: INFO: namespace: e2e-tests-configmap-wdw92, resource: bindings, ignored listing per whitelist
Jan  1 12:10:14.382: INFO: namespace e2e-tests-configmap-wdw92 deletion completed in 6.16720055s

• [SLOW TEST:17.698 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:10:14.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:10:25.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-z5gl5" for this suite.
Jan  1 12:10:51.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:10:51.956: INFO: namespace: e2e-tests-replication-controller-z5gl5, resource: bindings, ignored listing per whitelist
Jan  1 12:10:52.161: INFO: namespace e2e-tests-replication-controller-z5gl5 deletion completed in 26.402688433s

• [SLOW TEST:37.779 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:10:52.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  1 12:11:02.984: INFO: Successfully updated pod "pod-update-c1495318-2c8f-11ea-8bf6-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan  1 12:11:03.089: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:11:03.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-q6f7q" for this suite.
Jan  1 12:11:27.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:11:27.220: INFO: namespace: e2e-tests-pods-q6f7q, resource: bindings, ignored listing per whitelist
Jan  1 12:11:27.297: INFO: namespace e2e-tests-pods-q6f7q deletion completed in 24.201715105s

• [SLOW TEST:35.135 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:11:27.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d63c46bc-2c8f-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:11:27.500: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-qh7nl" to be "success or failure"
Jan  1 12:11:27.599: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.738359ms
Jan  1 12:11:29.615: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114941714s
Jan  1 12:11:31.635: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135312751s
Jan  1 12:11:33.790: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29005473s
Jan  1 12:11:35.812: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312189382s
Jan  1 12:11:37.836: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.335975425s
Jan  1 12:11:39.868: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.368011174s
STEP: Saw pod success
Jan  1 12:11:39.868: INFO: Pod "pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:11:39.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 12:11:40.006: INFO: Waiting for pod pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:11:40.020: INFO: Pod pod-projected-secrets-d63e58d9-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:11:40.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qh7nl" for this suite.
Jan  1 12:11:46.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:11:46.266: INFO: namespace: e2e-tests-projected-qh7nl, resource: bindings, ignored listing per whitelist
Jan  1 12:11:46.303: INFO: namespace e2e-tests-projected-qh7nl deletion completed in 6.274289253s

• [SLOW TEST:19.006 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:11:46.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-gq6qn/secret-test-e1a7ddbb-2c8f-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:11:46.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-gq6qn" to be "success or failure"
Jan  1 12:11:46.724: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.546709ms
Jan  1 12:11:48.741: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045036551s
Jan  1 12:11:50.754: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058174709s
Jan  1 12:11:52.928: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232105837s
Jan  1 12:11:55.014: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318142102s
Jan  1 12:11:57.029: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332702119s
STEP: Saw pod success
Jan  1 12:11:57.029: INFO: Pod "pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:11:57.036: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005 container env-test: 
STEP: delete the pod
Jan  1 12:11:57.190: INFO: Waiting for pod pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:11:58.038: INFO: Pod pod-configmaps-e1b1441c-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:11:58.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gq6qn" for this suite.
Jan  1 12:12:04.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:12:04.255: INFO: namespace: e2e-tests-secrets-gq6qn, resource: bindings, ignored listing per whitelist
Jan  1 12:12:04.261: INFO: namespace e2e-tests-secrets-gq6qn deletion completed in 6.196188127s

• [SLOW TEST:17.958 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:12:04.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:12:05.684: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ec6b2c81-2c8f-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f32e62), BlockOwnerDeletion:(*bool)(0xc001f32e63)}}
Jan  1 12:12:05.842: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ec553e4e-2c8f-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f33052), BlockOwnerDeletion:(*bool)(0xc001f33053)}}
Jan  1 12:12:07.677: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ec65e5b0-2c8f-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002a164a2), BlockOwnerDeletion:(*bool)(0xc002a164a3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:12:13.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-sf2xm" for this suite.
Jan  1 12:12:19.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:12:19.351: INFO: namespace: e2e-tests-gc-sf2xm, resource: bindings, ignored listing per whitelist
Jan  1 12:12:19.386: INFO: namespace e2e-tests-gc-sf2xm deletion completed in 6.226354724s

• [SLOW TEST:15.124 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:12:19.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan  1 12:12:19.687: INFO: Waiting up to 5m0s for pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005" in namespace "e2e-tests-var-expansion-blnt9" to be "success or failure"
Jan  1 12:12:19.699: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.020681ms
Jan  1 12:12:21.717: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0295591s
Jan  1 12:12:23.750: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062871302s
Jan  1 12:12:25.936: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247938708s
Jan  1 12:12:28.446: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758135576s
Jan  1 12:12:30.461: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.773379082s
STEP: Saw pod success
Jan  1 12:12:30.461: INFO: Pod "var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:12:30.467: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 12:12:30.587: INFO: Waiting for pod var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:12:30.608: INFO: Pod var-expansion-f554dca7-2c8f-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:12:30.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-blnt9" for this suite.
Jan  1 12:12:36.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:12:36.842: INFO: namespace: e2e-tests-var-expansion-blnt9, resource: bindings, ignored listing per whitelist
Jan  1 12:12:36.886: INFO: namespace e2e-tests-var-expansion-blnt9 deletion completed in 6.192609994s

• [SLOW TEST:17.500 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:12:36.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:12:37.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-db5f4" for this suite.
Jan  1 12:13:01.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:13:01.463: INFO: namespace: e2e-tests-pods-db5f4, resource: bindings, ignored listing per whitelist
Jan  1 12:13:01.573: INFO: namespace e2e-tests-pods-db5f4 deletion completed in 24.328310316s

• [SLOW TEST:24.685 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:13:01.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  1 12:13:11.951: INFO: Pod pod-hostip-0e775187-2c90-11ea-8bf6-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:13:11.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8d4zr" for this suite.
Jan  1 12:13:36.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:13:36.166: INFO: namespace: e2e-tests-pods-8d4zr, resource: bindings, ignored listing per whitelist
Jan  1 12:13:36.181: INFO: namespace e2e-tests-pods-8d4zr deletion completed in 24.223500014s

• [SLOW TEST:34.608 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:13:36.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:13:36.691: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  1 12:13:36.711: INFO: Number of nodes with available pods: 0
Jan  1 12:13:36.711: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  1 12:13:36.925: INFO: Number of nodes with available pods: 0
Jan  1 12:13:36.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:37.943: INFO: Number of nodes with available pods: 0
Jan  1 12:13:37.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:38.944: INFO: Number of nodes with available pods: 0
Jan  1 12:13:38.945: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:40.848: INFO: Number of nodes with available pods: 0
Jan  1 12:13:40.849: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:40.969: INFO: Number of nodes with available pods: 0
Jan  1 12:13:40.969: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:41.960: INFO: Number of nodes with available pods: 0
Jan  1 12:13:41.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:42.988: INFO: Number of nodes with available pods: 0
Jan  1 12:13:42.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:44.285: INFO: Number of nodes with available pods: 0
Jan  1 12:13:44.285: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:44.976: INFO: Number of nodes with available pods: 0
Jan  1 12:13:44.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:45.944: INFO: Number of nodes with available pods: 0
Jan  1 12:13:45.944: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:46.960: INFO: Number of nodes with available pods: 0
Jan  1 12:13:46.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:47.947: INFO: Number of nodes with available pods: 1
Jan  1 12:13:47.948: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  1 12:13:48.031: INFO: Number of nodes with available pods: 1
Jan  1 12:13:48.032: INFO: Number of running nodes: 0, number of available pods: 1
Jan  1 12:13:49.152: INFO: Number of nodes with available pods: 0
Jan  1 12:13:49.152: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  1 12:13:49.183: INFO: Number of nodes with available pods: 0
Jan  1 12:13:49.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:50.196: INFO: Number of nodes with available pods: 0
Jan  1 12:13:50.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:51.455: INFO: Number of nodes with available pods: 0
Jan  1 12:13:51.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:52.212: INFO: Number of nodes with available pods: 0
Jan  1 12:13:52.212: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:53.204: INFO: Number of nodes with available pods: 0
Jan  1 12:13:53.204: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:54.204: INFO: Number of nodes with available pods: 0
Jan  1 12:13:54.205: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:55.228: INFO: Number of nodes with available pods: 0
Jan  1 12:13:55.228: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:56.202: INFO: Number of nodes with available pods: 0
Jan  1 12:13:56.202: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:57.207: INFO: Number of nodes with available pods: 0
Jan  1 12:13:57.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:58.201: INFO: Number of nodes with available pods: 0
Jan  1 12:13:58.202: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:13:59.201: INFO: Number of nodes with available pods: 0
Jan  1 12:13:59.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:00.200: INFO: Number of nodes with available pods: 0
Jan  1 12:14:00.200: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:01.197: INFO: Number of nodes with available pods: 0
Jan  1 12:14:01.197: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:02.273: INFO: Number of nodes with available pods: 0
Jan  1 12:14:02.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:03.198: INFO: Number of nodes with available pods: 0
Jan  1 12:14:03.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:04.383: INFO: Number of nodes with available pods: 0
Jan  1 12:14:04.383: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:05.208: INFO: Number of nodes with available pods: 0
Jan  1 12:14:05.208: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:06.227: INFO: Number of nodes with available pods: 0
Jan  1 12:14:06.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:07.319: INFO: Number of nodes with available pods: 0
Jan  1 12:14:07.319: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:08.923: INFO: Number of nodes with available pods: 0
Jan  1 12:14:08.924: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:09.214: INFO: Number of nodes with available pods: 0
Jan  1 12:14:09.214: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:10.218: INFO: Number of nodes with available pods: 0
Jan  1 12:14:10.219: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:11.198: INFO: Number of nodes with available pods: 0
Jan  1 12:14:11.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:12.242: INFO: Number of nodes with available pods: 0
Jan  1 12:14:12.242: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:14:13.219: INFO: Number of nodes with available pods: 1
Jan  1 12:14:13.219: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2672d, will wait for the garbage collector to delete the pods
Jan  1 12:14:13.371: INFO: Deleting DaemonSet.extensions daemon-set took: 79.843771ms
Jan  1 12:14:13.472: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.833334ms
Jan  1 12:14:22.678: INFO: Number of nodes with available pods: 0
Jan  1 12:14:22.678: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 12:14:22.688: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2672d/daemonsets","resourceVersion":"16794760"},"items":null}

Jan  1 12:14:22.691: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2672d/pods","resourceVersion":"16794760"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:14:22.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2672d" for this suite.
Jan  1 12:14:28.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:14:28.969: INFO: namespace: e2e-tests-daemonsets-2672d, resource: bindings, ignored listing per whitelist
Jan  1 12:14:29.093: INFO: namespace e2e-tests-daemonsets-2672d deletion completed in 6.341287451s

• [SLOW TEST:52.911 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:14:29.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-429c43ab-2c90-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:14:29.282: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-pb5hz" to be "success or failure"
Jan  1 12:14:29.346: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.731358ms
Jan  1 12:14:31.363: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080833633s
Jan  1 12:14:33.380: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097797771s
Jan  1 12:14:35.639: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356548999s
Jan  1 12:14:37.652: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369465052s
Jan  1 12:14:39.671: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.388375802s
STEP: Saw pod success
Jan  1 12:14:39.671: INFO: Pod "pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:14:39.678: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 12:14:40.507: INFO: Waiting for pod pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:14:40.655: INFO: Pod pod-projected-secrets-429d39e3-2c90-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:14:40.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pb5hz" for this suite.
Jan  1 12:14:46.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:14:46.795: INFO: namespace: e2e-tests-projected-pb5hz, resource: bindings, ignored listing per whitelist
Jan  1 12:14:46.874: INFO: namespace e2e-tests-projected-pb5hz deletion completed in 6.192688899s

• [SLOW TEST:17.780 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:14:46.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  1 12:15:07.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:07.445: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:09.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:09.462: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:11.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:11.462: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:13.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:13.468: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:15.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:15.466: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:17.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:17.467: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:19.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:19.461: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:21.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:21.464: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:23.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:23.458: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:25.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:25.479: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:27.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:27.468: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:29.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:29.462: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:31.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:31.465: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 12:15:33.446: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 12:15:33.458: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:15:33.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2dx9x" for this suite.
Jan  1 12:15:57.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:15:57.634: INFO: namespace: e2e-tests-container-lifecycle-hook-2dx9x, resource: bindings, ignored listing per whitelist
Jan  1 12:15:57.721: INFO: namespace e2e-tests-container-lifecycle-hook-2dx9x deletion completed in 24.217554912s

• [SLOW TEST:70.847 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:15:57.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-77747a27-2c90-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:15:57.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-8r5x8" to be "success or failure"
Jan  1 12:15:57.976: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.572204ms
Jan  1 12:16:00.097: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135092266s
Jan  1 12:16:02.124: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16186351s
Jan  1 12:16:04.258: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296552474s
Jan  1 12:16:06.274: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311850403s
Jan  1 12:16:08.292: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.330194219s
STEP: Saw pod success
Jan  1 12:16:08.292: INFO: Pod "pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:16:08.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 12:16:08.389: INFO: Waiting for pod pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:16:08.411: INFO: Pod pod-configmaps-7777b2c0-2c90-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:16:08.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8r5x8" for this suite.
Jan  1 12:16:16.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:16:16.610: INFO: namespace: e2e-tests-configmap-8r5x8, resource: bindings, ignored listing per whitelist
Jan  1 12:16:16.679: INFO: namespace e2e-tests-configmap-8r5x8 deletion completed in 8.252448913s

• [SLOW TEST:18.958 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
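The ConfigMap-volume test above mounts a ConfigMap into a pod as a volume and reads a key back from the filesystem. A minimal sketch of that kind of manifest pair, as plain Python dicts (names and paths are illustrative, not the exact fixtures the e2e framework generates):

```python
# A ConfigMap with one key, and a pod that mounts it as a volume and
# prints the key's file so the test can check the contents.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-volume"},
    "data": {"data-1": "value-1"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps"},
    "spec": {
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            # Each ConfigMap key appears as a file under the mount path.
            "command": ["cat", "/etc/configmap-volume/data-1"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "restartPolicy": "Never",  # the test waits for "success or failure"
        "volumes": [{"name": "configmap-volume",
                     "configMap": {"name": "configmap-test-volume"}}],
    },
}
```

The pod's volume references the ConfigMap by name, which is why the log first creates the ConfigMap, then the consuming pod.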
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:16:16.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-82cda7dc-2c90-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:16:17.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-5gdc7" to be "success or failure"
Jan  1 12:16:17.165: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.797371ms
Jan  1 12:16:19.190: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078108136s
Jan  1 12:16:21.224: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111495515s
Jan  1 12:16:23.450: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337771403s
Jan  1 12:16:25.494: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381502921s
Jan  1 12:16:27.504: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.391467537s
Jan  1 12:16:29.552: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.439463635s
STEP: Saw pod success
Jan  1 12:16:29.552: INFO: Pod "pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:16:29.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 12:16:29.737: INFO: Waiting for pod pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:16:29.743: INFO: Pod pod-projected-configmaps-82ced38a-2c90-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:16:29.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5gdc7" for this suite.
Jan  1 12:16:35.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:16:35.919: INFO: namespace: e2e-tests-projected-5gdc7, resource: bindings, ignored listing per whitelist
Jan  1 12:16:35.965: INFO: namespace e2e-tests-projected-5gdc7 deletion completed in 6.214556443s

• [SLOW TEST:19.286 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
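The "projected configMap ... with mappings as non-root" test combines three things: a projected volume source, an item mapping that renames the key's file path, and a non-root `runAsUser`. A hedged sketch of such a pod spec (all names and the UID are hypothetical):

```python
# Pod spec sketch: a ConfigMap projected into a volume, with the key
# remapped to a nested path, run under a non-root UID.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-configmaps"},
    "spec": {
        "securityContext": {"runAsUser": 1000},  # non-root
        "containers": [{
            "name": "projected-configmap-volume-test",
            "image": "busybox",
            # The "mapping" means the key is read from the remapped path.
            "command": ["cat", "/etc/projected-configmap-volume/path/to/data-2"],
            "volumeMounts": [{"name": "projected-configmap-volume",
                              "mountPath": "/etc/projected-configmap-volume"}],
        }],
        "restartPolicy": "Never",
        "volumes": [{
            "name": "projected-configmap-volume",
            "projected": {"sources": [{
                "configMap": {
                    "name": "projected-configmap-test-volume-map",
                    # items[] remaps key "data-2" to file "path/to/data-2"
                    "items": [{"key": "data-2", "path": "path/to/data-2"}],
                },
            }]},
        }],
    },
}
```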
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:16:35.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rb26j
Jan  1 12:16:46.244: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rb26j
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 12:16:46.254: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:20:48.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rb26j" for this suite.
Jan  1 12:20:56.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:20:56.687: INFO: namespace: e2e-tests-container-probe-rb26j, resource: bindings, ignored listing per whitelist
Jan  1 12:20:56.687: INFO: namespace e2e-tests-container-probe-rb26j deletion completed in 8.294744458s

• [SLOW TEST:260.721 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
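The liveness-probe test above creates `/tmp/health`, leaves it in place, and then verifies over roughly four minutes that `restartCount` stays at its initial value of 0. A sketch of the shape of that pod (command, image, and probe timings are assumptions, not the exact fixture):

```python
# Pod whose exec liveness probe keeps succeeding, so the kubelet
# never restarts the container and restartCount stays 0.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-exec"},
    "spec": {
        "containers": [{
            "name": "liveness",
            "image": "busybox",
            # Create the probed file up front and keep the container alive
            # for the whole observation window.
            "command": ["/bin/sh", "-c", "touch /tmp/health; sleep 600"],
            "livenessProbe": {
                "exec": {"command": ["cat", "/tmp/health"]},
                "initialDelaySeconds": 15,
                "failureThreshold": 1,
            },
        }],
    },
}
```

Because the probe command exits 0 on every check, the test only has to watch the pod's status and assert that `restartCount` never increments.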
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:20:56.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2v6b5
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  1 12:20:57.123: INFO: Found 0 stateful pods, waiting for 3
Jan  1 12:21:07.160: INFO: Found 2 stateful pods, waiting for 3
Jan  1 12:21:17.281: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:21:17.282: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:21:17.282: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 12:21:27.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:21:27.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:21:27.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  1 12:21:27.212: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  1 12:21:37.408: INFO: Updating stateful set ss2
Jan  1 12:21:37.486: INFO: Waiting for Pod e2e-tests-statefulset-2v6b5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 12:21:47.513: INFO: Waiting for Pod e2e-tests-statefulset-2v6b5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  1 12:21:58.232: INFO: Found 2 stateful pods, waiting for 3
Jan  1 12:22:08.662: INFO: Found 2 stateful pods, waiting for 3
Jan  1 12:22:18.264: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:22:18.264: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:22:18.264: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 12:22:28.253: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:22:28.253: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 12:22:28.253: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  1 12:22:28.309: INFO: Updating stateful set ss2
Jan  1 12:22:28.430: INFO: Waiting for Pod e2e-tests-statefulset-2v6b5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 12:22:38.585: INFO: Updating stateful set ss2
Jan  1 12:22:38.778: INFO: Waiting for StatefulSet e2e-tests-statefulset-2v6b5/ss2 to complete update
Jan  1 12:22:38.778: INFO: Waiting for Pod e2e-tests-statefulset-2v6b5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 12:22:48.798: INFO: Waiting for StatefulSet e2e-tests-statefulset-2v6b5/ss2 to complete update
Jan  1 12:22:48.798: INFO: Waiting for Pod e2e-tests-statefulset-2v6b5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 12:22:58.809: INFO: Waiting for StatefulSet e2e-tests-statefulset-2v6b5/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  1 12:23:08.812: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2v6b5
Jan  1 12:23:08.818: INFO: Scaling statefulset ss2 to 0
Jan  1 12:23:48.979: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 12:23:48.991: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:23:49.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2v6b5" for this suite.
Jan  1 12:23:57.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:23:57.203: INFO: namespace: e2e-tests-statefulset-2v6b5, resource: bindings, ignored listing per whitelist
Jan  1 12:23:57.334: INFO: namespace e2e-tests-statefulset-2v6b5 deletion completed in 8.285753303s

• [SLOW TEST:180.647 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
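The canary and phased rolling updates in the StatefulSet test are driven by the `RollingUpdate` strategy's `partition` field: pods with ordinal greater than or equal to the partition are updated to the new revision, the rest stay on the old one. A small helper makes the arithmetic in the log concrete (this is a sketch of the partition semantics, not framework code):

```python
def updated_ordinals(replicas, partition):
    """Ordinals a RollingUpdate with the given partition will move to
    the new revision: every pod with ordinal >= partition."""
    return [i for i in range(replicas) if i >= partition]

# "Not applying an update when the partition is greater than the
# number of replicas": partition 3 with 3 replicas updates nothing.
# "Performing a canary update": partition 2 updates only ss2-2.
# "Phased rolling update": lowering the partition step by step
# (2 -> 1 -> 0) rolls the change out one ordinal at a time.
```

This matches the log's order: ss2-2 moves to revision ss2-7c9b54fd4c first, then ss2-1, then ss2-0 as the partition is lowered.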
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:23:57.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:24:07.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-9j5m5" for this suite.
Jan  1 12:24:49.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:24:50.007: INFO: namespace: e2e-tests-kubelet-test-9j5m5, resource: bindings, ignored listing per whitelist
Jan  1 12:24:50.038: INFO: namespace e2e-tests-kubelet-test-9j5m5 deletion completed in 42.17389989s

• [SLOW TEST:52.702 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
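The hostAliases test checks that entries from `pod.spec.hostAliases` are written into the container's `/etc/hosts`. A sketch of that mapping, with a helper that renders aliases the way they appear as hosts-file lines (IPs and hostnames here are illustrative):

```python
def render_host_aliases(aliases):
    """Render pod.spec.hostAliases entries as /etc/hosts lines:
    one line per alias, IP followed by its hostnames."""
    return ["{} {}".format(a["ip"], " ".join(a["hostnames"]))
            for a in aliases]

# Hypothetical hostAliases block like the one the e2e pod would carry:
host_aliases = [
    {"ip": "123.45.67.89", "hostnames": ["foo.remote", "bar.remote"]},
]
```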
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:24:50.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:24:50.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-d7lgj" to be "success or failure"
Jan  1 12:24:50.376: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.325697ms
Jan  1 12:24:52.390: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062198527s
Jan  1 12:24:54.417: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089590167s
Jan  1 12:24:57.087: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.75899969s
Jan  1 12:24:59.107: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.778744372s
Jan  1 12:25:01.130: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.80262985s
STEP: Saw pod success
Jan  1 12:25:01.131: INFO: Pod "downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:25:01.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:25:01.214: INFO: Waiting for pod downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:25:01.341: INFO: Pod downwardapi-volume-b4c559d6-2c91-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:25:01.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d7lgj" for this suite.
Jan  1 12:25:07.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:25:07.581: INFO: namespace: e2e-tests-downward-api-d7lgj, resource: bindings, ignored listing per whitelist
Jan  1 12:25:07.649: INFO: namespace e2e-tests-downward-api-d7lgj deletion completed in 6.298759833s

• [SLOW TEST:17.611 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
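The Downward API volume test exposes the container's own CPU limit to it as a file, via a `resourceFieldRef` item. A hedged sketch of the pod spec involved (the limit value, file path, and image are assumptions):

```python
# Pod that mounts a downwardAPI volume exposing its own cpu limit;
# the test reads the file back and compares it to the declared limit.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "1250m"}},
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "restartPolicy": "Never",
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {"items": [{
                "path": "cpu_limit",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "limits.cpu",
                },
            }]},
        }],
    },
}
```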
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:25:07.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan  1 12:25:07.854: INFO: Waiting up to 5m0s for pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005" in namespace "e2e-tests-var-expansion-d5vld" to be "success or failure"
Jan  1 12:25:07.867: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.194542ms
Jan  1 12:25:09.912: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0575325s
Jan  1 12:25:11.944: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088920349s
Jan  1 12:25:14.082: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22699277s
Jan  1 12:25:16.094: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238894726s
Jan  1 12:25:18.456: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.600704204s
STEP: Saw pod success
Jan  1 12:25:18.456: INFO: Pod "var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:25:18.564: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 12:25:18.688: INFO: Waiting for pod var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:25:18.696: INFO: Pod var-expansion-bf38d9b7-2c91-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:25:18.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-d5vld" for this suite.
Jan  1 12:25:24.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:25:24.784: INFO: namespace: e2e-tests-var-expansion-d5vld, resource: bindings, ignored listing per whitelist
Jan  1 12:25:24.912: INFO: namespace e2e-tests-var-expansion-d5vld deletion completed in 6.208950231s

• [SLOW TEST:17.263 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
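The variable-expansion test verifies that `$(VAR)` references in a container's `args` are replaced with values from its `env`. The substitution can be sketched as a small helper (this mimics the `$(NAME)` syntax Kubernetes uses; unknown variables are left untouched, which matches the documented behavior):

```python
import re

def expand(args, env):
    """Expand $(NAME) references in args using env; leave unknown
    references as-is, as the kubelet does."""
    pattern = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")
    return [pattern.sub(lambda m: env.get(m.group(1), m.group(0)), a)
            for a in args]
```

So a container with `env: [{name: TEST_VAR, value: test-value}]` and `args: ["$(TEST_VAR)"]` ends up running with the literal argument `test-value`, which is what the dapi-container in the log echoes for verification.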
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:25:24.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:25:25.471: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.720193ms)
Jan  1 12:25:25.488: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.376091ms)
Jan  1 12:25:25.495: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.35582ms)
Jan  1 12:25:25.500: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.058819ms)
Jan  1 12:25:25.505: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.37606ms)
Jan  1 12:25:25.511: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.951516ms)
Jan  1 12:25:25.520: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.677102ms)
Jan  1 12:25:25.570: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 50.46738ms)
Jan  1 12:25:25.587: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.583552ms)
Jan  1 12:25:25.603: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.017463ms)
Jan  1 12:25:25.610: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.940446ms)
Jan  1 12:25:25.618: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.453872ms)
Jan  1 12:25:25.623: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.004828ms)
Jan  1 12:25:25.631: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.689937ms)
Jan  1 12:25:25.636: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.087251ms)
Jan  1 12:25:25.642: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.35617ms)
Jan  1 12:25:25.647: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.527758ms)
Jan  1 12:25:25.652: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.120113ms)
Jan  1 12:25:25.657: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.788315ms)
Jan  1 12:25:25.662: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.419387ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:25:25.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-g84cm" for this suite.
Jan  1 12:25:31.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:25:31.997: INFO: namespace: e2e-tests-proxy-g84cm, resource: bindings, ignored listing per whitelist
Jan  1 12:25:32.012: INFO: namespace e2e-tests-proxy-g84cm deletion completed in 6.30542489s

• [SLOW TEST:7.099 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
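Each of the twenty requests in the proxy test above hits the node's `logs` proxy subresource through the API server. The URL structure is simple enough to sketch as a helper (path shape taken directly from the log lines):

```python
def node_proxy_logs_path(node_name, logfile=""):
    """API-server path for a node's kubelet log directory via the
    proxy subresource, optionally pointing at a specific log file."""
    return "/api/v1/nodes/{}/proxy/logs/{}".format(node_name, logfile)
```

The test issues the same GET repeatedly and records per-request latency; a 200 with a directory listing (the `alternatives.log` fragments in the log) counts as success.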
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:25:32.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:25:32.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-6jwl2" to be "success or failure"
Jan  1 12:25:32.255: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.855189ms
Jan  1 12:25:34.278: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031780442s
Jan  1 12:25:36.290: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043453711s
Jan  1 12:25:38.382: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135344769s
Jan  1 12:25:40.410: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162990042s
Jan  1 12:25:42.447: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.200141258s
STEP: Saw pod success
Jan  1 12:25:42.447: INFO: Pod "downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:25:42.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:25:42.623: INFO: Waiting for pod downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:25:42.636: INFO: Pod downwardapi-volume-cdc29cde-2c91-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:25:42.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6jwl2" for this suite.
Jan  1 12:25:48.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:25:49.175: INFO: namespace: e2e-tests-projected-6jwl2, resource: bindings, ignored listing per whitelist
Jan  1 12:25:49.195: INFO: namespace e2e-tests-projected-6jwl2 deletion completed in 6.545639195s

• [SLOW TEST:17.182 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
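The projected downwardAPI test above covers the fallback case: when a container declares no CPU limit, a `limits.cpu` resourceFieldRef reports the node's allocatable CPU instead. That rule can be stated as a one-line helper (a sketch of the documented behavior, not framework code):

```python
def effective_cpu_limit(container_limit, node_allocatable_cpu):
    """Value the downward API exposes for limits.cpu: the container's
    own limit when set, otherwise the node's allocatable cpu."""
    if container_limit is not None:
        return container_limit
    return node_allocatable_cpu
```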
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:25:49.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:25:49.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  1 12:25:49.564: INFO: stderr: ""
Jan  1 12:25:49.565: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:25:49.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fzlwm" for this suite.
Jan  1 12:25:55.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:25:55.896: INFO: namespace: e2e-tests-kubectl-fzlwm, resource: bindings, ignored listing per whitelist
Jan  1 12:25:55.901: INFO: namespace e2e-tests-kubectl-fzlwm deletion completed in 6.321841652s

• [SLOW TEST:6.707 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:25:55.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:25:56.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-g9qjd" to be "success or failure"
Jan  1 12:25:56.056: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232343ms
Jan  1 12:25:58.062: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014486943s
Jan  1 12:26:00.085: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037703949s
Jan  1 12:26:02.159: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111741583s
Jan  1 12:26:04.190: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14217133s
Jan  1 12:26:06.210: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162042499s
STEP: Saw pod success
Jan  1 12:26:06.210: INFO: Pod "downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:26:06.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:26:06.307: INFO: Waiting for pod downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:26:06.375: INFO: Pod downwardapi-volume-dbf3ae43-2c91-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:26:06.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g9qjd" for this suite.
Jan  1 12:26:12.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:26:13.437: INFO: namespace: e2e-tests-downward-api-g9qjd, resource: bindings, ignored listing per whitelist
Jan  1 12:26:13.446: INFO: namespace e2e-tests-downward-api-g9qjd deletion completed in 7.065479762s

• [SLOW TEST:17.544 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
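The memory-request variant above works the same way, except the container does declare a request and the downward API file must report that value (in bytes). A sketch of such a pod, with illustrative names, image, and request size:

```yaml
# Hypothetical sketch: the downward API file should contain the declared
# memory request, rendered in bytes.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name taken from the log above
    image: busybox                   # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi                 # illustrative value the file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```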
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:26:13.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-pjtj
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 12:26:13.732: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pjtj" in namespace "e2e-tests-subpath-c5m55" to be "success or failure"
Jan  1 12:26:13.758: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 26.331931ms
Jan  1 12:26:15.779: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047232197s
Jan  1 12:26:17.799: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067238594s
Jan  1 12:26:20.126: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393535746s
Jan  1 12:26:22.167: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.434422263s
Jan  1 12:26:24.214: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.481690005s
Jan  1 12:26:26.235: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.502625197s
Jan  1 12:26:28.646: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.914174403s
Jan  1 12:26:31.055: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Pending", Reason="", readiness=false. Elapsed: 17.322544399s
Jan  1 12:26:33.099: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 19.366578209s
Jan  1 12:26:35.139: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 21.407063577s
Jan  1 12:26:37.160: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 23.427465605s
Jan  1 12:26:39.181: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 25.448822233s
Jan  1 12:26:41.206: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 27.474024548s
Jan  1 12:26:43.257: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 29.525233542s
Jan  1 12:26:45.277: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 31.544662087s
Jan  1 12:26:47.301: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 33.568573277s
Jan  1 12:26:49.320: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Running", Reason="", readiness=false. Elapsed: 35.587519896s
Jan  1 12:26:51.341: INFO: Pod "pod-subpath-test-secret-pjtj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.608758828s
STEP: Saw pod success
Jan  1 12:26:51.341: INFO: Pod "pod-subpath-test-secret-pjtj" satisfied condition "success or failure"
Jan  1 12:26:51.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-pjtj container test-container-subpath-secret-pjtj: 
STEP: delete the pod
Jan  1 12:26:51.622: INFO: Waiting for pod pod-subpath-test-secret-pjtj to disappear
Jan  1 12:26:51.638: INFO: Pod pod-subpath-test-secret-pjtj no longer exists
STEP: Deleting pod pod-subpath-test-secret-pjtj
Jan  1 12:26:51.638: INFO: Deleting pod "pod-subpath-test-secret-pjtj" in namespace "e2e-tests-subpath-c5m55"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:26:51.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-c5m55" for this suite.
Jan  1 12:26:59.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:26:59.890: INFO: namespace: e2e-tests-subpath-c5m55, resource: bindings, ignored listing per whitelist
Jan  1 12:27:00.100: INFO: namespace e2e-tests-subpath-c5m55 deletion completed in 8.434596448s

• [SLOW TEST:46.653 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
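The subpath spec above checks that a `subPath` mount of a secret volume keeps working across the volume's atomic-writer updates (secret volumes are rewritten via symlink swaps, which `subPath` mounts bypass). A pod resembling the one created might look like this sketch; the secret name, key, paths, and image are assumptions, not taken from the log:

```yaml
# Hypothetical sketch of a subPath mount into a secret volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                        # illustrative
    command: ["sh", "-c", "cat /test-volume/my-key && sleep 30"]
    volumeMounts:
    - name: secret-volume
      mountPath: /test-volume
      subPath: my-key                     # mounts a single key, not the whole volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret               # illustrative secret name
```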
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:27:00.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  1 12:27:00.291: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 12:27:00.304: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 12:27:00.315: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  1 12:27:00.326: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  1 12:27:00.326: INFO: 	Container coredns ready: true, restart count 0
Jan  1 12:27:00.326: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:27:00.326: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:27:00.327: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:27:00.327: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  1 12:27:00.327: INFO: 	Container coredns ready: true, restart count 0
Jan  1 12:27:00.327: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  1 12:27:00.327: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 12:27:00.327: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:27:00.327: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  1 12:27:00.327: INFO: 	Container weave ready: true, restart count 0
Jan  1 12:27:00.327: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e5c2f94b5c7b4c], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:27:01.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-srp9x" for this suite.
Jan  1 12:27:07.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:27:07.556: INFO: namespace: e2e-tests-sched-pred-srp9x, resource: bindings, ignored listing per whitelist
Jan  1 12:27:07.661: INFO: namespace e2e-tests-sched-pred-srp9x deletion completed in 6.206721007s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.561 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
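The scheduling spec above asserts the negative case: a pod whose `nodeSelector` matches no node must stay Pending with a FailedScheduling event, which is exactly what the logged event shows ("0/1 nodes are available: 1 node(s) didn't match node selector."). A sketch of such a pod, with the label and image as illustrative assumptions:

```yaml
# Hypothetical sketch: no node carries this label, so the pod cannot schedule.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod          # name matches the FailedScheduling event above
spec:
  nodeSelector:
    nonexistent-label: no-match # illustrative; chosen to match no node
  containers:
  - name: restricted
    image: k8s.gcr.io/pause     # illustrative
```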
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:27:07.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-2gkk9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2gkk9 to expose endpoints map[]
Jan  1 12:27:08.084: INFO: Get endpoints failed (21.182017ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  1 12:27:09.106: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2gkk9 exposes endpoints map[] (1.043404042s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-2gkk9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2gkk9 to expose endpoints map[pod1:[80]]
Jan  1 12:27:13.657: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.519250717s elapsed, will retry)
Jan  1 12:27:18.839: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2gkk9 exposes endpoints map[pod1:[80]] (9.700895822s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-2gkk9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2gkk9 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  1 12:27:23.347: INFO: Unexpected endpoints: found map[0784d4db-2c92-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.497255969s elapsed, will retry)
Jan  1 12:27:28.677: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2gkk9 exposes endpoints map[pod1:[80] pod2:[80]] (9.827787266s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-2gkk9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2gkk9 to expose endpoints map[pod2:[80]]
Jan  1 12:27:29.739: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2gkk9 exposes endpoints map[pod2:[80]] (1.044367245s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-2gkk9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2gkk9 to expose endpoints map[]
Jan  1 12:27:31.280: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2gkk9 exposes endpoints map[] (1.51602966s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:27:31.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2gkk9" for this suite.
Jan  1 12:27:53.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:27:53.566: INFO: namespace: e2e-tests-services-2gkk9, resource: bindings, ignored listing per whitelist
Jan  1 12:27:53.626: INFO: namespace e2e-tests-services-2gkk9 deletion completed in 22.192650951s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:45.965 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
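The Services spec above creates a service with no backing pods (endpoints `map[]`), then adds and removes labelled pods and watches the endpoints map track them. A sketch of the objects involved; the service name `endpoint-test2`, pod names, and port 80 come from the log, while the selector key and image are assumptions:

```yaml
# Hypothetical sketch of the service and one backing pod from the spec above.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2          # service name taken from the log
spec:
  selector:
    name: endpoint-test2        # selector key is illustrative
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1                    # pod name taken from the log
  labels:
    name: endpoint-test2        # matching label makes pod1 appear in the endpoints
spec:
  containers:
  - name: serve
    image: k8s.gcr.io/pause     # illustrative; the test serves on port 80
    ports:
    - containerPort: 80
```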
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:27:53.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-5qf8
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 12:27:54.017: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5qf8" in namespace "e2e-tests-subpath-4xlq7" to be "success or failure"
Jan  1 12:27:54.128: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 110.576442ms
Jan  1 12:27:56.325: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307954499s
Jan  1 12:27:58.345: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327399729s
Jan  1 12:28:00.504: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.486334237s
Jan  1 12:28:02.552: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534244702s
Jan  1 12:28:04.566: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.54876521s
Jan  1 12:28:06.577: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.559218648s
Jan  1 12:28:08.593: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.575856922s
Jan  1 12:28:10.648: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 16.630081266s
Jan  1 12:28:12.711: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 18.693917408s
Jan  1 12:28:14.733: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 20.715531441s
Jan  1 12:28:16.751: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 22.733779383s
Jan  1 12:28:18.769: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 24.751639293s
Jan  1 12:28:20.791: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 26.773848927s
Jan  1 12:28:22.808: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 28.790587271s
Jan  1 12:28:24.831: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 30.813260362s
Jan  1 12:28:26.857: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Running", Reason="", readiness=false. Elapsed: 32.83994289s
Jan  1 12:28:28.889: INFO: Pod "pod-subpath-test-configmap-5qf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.871614625s
STEP: Saw pod success
Jan  1 12:28:28.889: INFO: Pod "pod-subpath-test-configmap-5qf8" satisfied condition "success or failure"
Jan  1 12:28:28.903: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-5qf8 container test-container-subpath-configmap-5qf8: 
STEP: delete the pod
Jan  1 12:28:29.091: INFO: Waiting for pod pod-subpath-test-configmap-5qf8 to disappear
Jan  1 12:28:29.117: INFO: Pod pod-subpath-test-configmap-5qf8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5qf8
Jan  1 12:28:29.117: INFO: Deleting pod "pod-subpath-test-configmap-5qf8" in namespace "e2e-tests-subpath-4xlq7"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:28:29.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4xlq7" for this suite.
Jan  1 12:28:37.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:28:37.422: INFO: namespace: e2e-tests-subpath-4xlq7, resource: bindings, ignored listing per whitelist
Jan  1 12:28:37.422: INFO: namespace e2e-tests-subpath-4xlq7 deletion completed in 8.253900969s

• [SLOW TEST:43.795 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:28:37.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:28:37.748: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:28:38.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-nbxzt" for this suite.
Jan  1 12:28:45.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:28:45.224: INFO: namespace: e2e-tests-custom-resource-definition-nbxzt, resource: bindings, ignored listing per whitelist
Jan  1 12:28:45.250: INFO: namespace e2e-tests-custom-resource-definition-nbxzt deletion completed in 6.268762532s

• [SLOW TEST:7.827 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
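The CRD spec above only creates and deletes a definition object. Against the v1.13 server shown in this run, that means the `apiextensions.k8s.io/v1beta1` API; a minimal definition might look like the following sketch, with every name an illustrative assumption:

```yaml
# Hypothetical sketch of a minimal CRD; v1beta1 matches the v1.13 server above.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com   # must be <plural>.<group>; names illustrative
spec:
  group: mygroup.example.com
  version: v1
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
```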
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:28:45.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  1 12:28:45.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:47.912: INFO: stderr: ""
Jan  1 12:28:47.912: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 12:28:47.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:48.243: INFO: stderr: ""
Jan  1 12:28:48.244: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-9bw9n "
Jan  1 12:28:48.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:48.394: INFO: stderr: ""
Jan  1 12:28:48.394: INFO: stdout: ""
Jan  1 12:28:48.395: INFO: update-demo-nautilus-6r72g is created but not running
Jan  1 12:28:53.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:53.603: INFO: stderr: ""
Jan  1 12:28:53.603: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-9bw9n "
Jan  1 12:28:53.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:53.751: INFO: stderr: ""
Jan  1 12:28:53.751: INFO: stdout: ""
Jan  1 12:28:53.751: INFO: update-demo-nautilus-6r72g is created but not running
Jan  1 12:28:58.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:58.934: INFO: stderr: ""
Jan  1 12:28:58.934: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-9bw9n "
Jan  1 12:28:58.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:28:59.090: INFO: stderr: ""
Jan  1 12:28:59.091: INFO: stdout: ""
Jan  1 12:28:59.091: INFO: update-demo-nautilus-6r72g is created but not running
Jan  1 12:29:04.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:04.306: INFO: stderr: ""
Jan  1 12:29:04.306: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-9bw9n "
Jan  1 12:29:04.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:04.424: INFO: stderr: ""
Jan  1 12:29:04.424: INFO: stdout: "true"
Jan  1 12:29:04.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:04.614: INFO: stderr: ""
Jan  1 12:29:04.614: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:29:04.614: INFO: validating pod update-demo-nautilus-6r72g
Jan  1 12:29:04.668: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:29:04.668: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:29:04.668: INFO: update-demo-nautilus-6r72g is verified up and running
Jan  1 12:29:04.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bw9n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:04.858: INFO: stderr: ""
Jan  1 12:29:04.858: INFO: stdout: "true"
Jan  1 12:29:04.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bw9n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:04.978: INFO: stderr: ""
Jan  1 12:29:04.978: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:29:04.978: INFO: validating pod update-demo-nautilus-9bw9n
Jan  1 12:29:04.990: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:29:04.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:29:04.990: INFO: update-demo-nautilus-9bw9n is verified up and running
STEP: scaling down the replication controller
Jan  1 12:29:04.999: INFO: scanned /root for discovery docs: 
Jan  1 12:29:04.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:06.251: INFO: stderr: ""
Jan  1 12:29:06.251: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 12:29:06.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:06.455: INFO: stderr: ""
Jan  1 12:29:06.455: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-9bw9n "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  1 12:29:11.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:11.670: INFO: stderr: ""
Jan  1 12:29:11.671: INFO: stdout: "update-demo-nautilus-6r72g "
Jan  1 12:29:11.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:11.841: INFO: stderr: ""
Jan  1 12:29:11.841: INFO: stdout: "true"
Jan  1 12:29:11.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:11.949: INFO: stderr: ""
Jan  1 12:29:11.949: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:29:11.949: INFO: validating pod update-demo-nautilus-6r72g
Jan  1 12:29:11.959: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:29:11.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:29:11.959: INFO: update-demo-nautilus-6r72g is verified up and running
STEP: scaling up the replication controller
Jan  1 12:29:11.963: INFO: scanned /root for discovery docs: 
Jan  1 12:29:11.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:13.797: INFO: stderr: ""
Jan  1 12:29:13.797: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 12:29:13.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:14.221: INFO: stderr: ""
Jan  1 12:29:14.221: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-vvkvx "
Jan  1 12:29:14.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:14.388: INFO: stderr: ""
Jan  1 12:29:14.389: INFO: stdout: "true"
Jan  1 12:29:14.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:14.609: INFO: stderr: ""
Jan  1 12:29:14.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:29:14.609: INFO: validating pod update-demo-nautilus-6r72g
Jan  1 12:29:14.667: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:29:14.667: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:29:14.667: INFO: update-demo-nautilus-6r72g is verified up and running
Jan  1 12:29:14.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvkvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:14.779: INFO: stderr: ""
Jan  1 12:29:14.779: INFO: stdout: ""
Jan  1 12:29:14.779: INFO: update-demo-nautilus-vvkvx is created but not running
Jan  1 12:29:19.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:20.137: INFO: stderr: ""
Jan  1 12:29:20.137: INFO: stdout: "update-demo-nautilus-6r72g update-demo-nautilus-vvkvx "
Jan  1 12:29:20.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:20.290: INFO: stderr: ""
Jan  1 12:29:20.293: INFO: stdout: "true"
Jan  1 12:29:20.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r72g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:20.423: INFO: stderr: ""
Jan  1 12:29:20.423: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:29:20.423: INFO: validating pod update-demo-nautilus-6r72g
Jan  1 12:29:20.445: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:29:20.445: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:29:20.445: INFO: update-demo-nautilus-6r72g is verified up and running
Jan  1 12:29:20.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvkvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:20.580: INFO: stderr: ""
Jan  1 12:29:20.580: INFO: stdout: "true"
Jan  1 12:29:20.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vvkvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:20.697: INFO: stderr: ""
Jan  1 12:29:20.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 12:29:20.697: INFO: validating pod update-demo-nautilus-vvkvx
Jan  1 12:29:20.709: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 12:29:20.709: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 12:29:20.709: INFO: update-demo-nautilus-vvkvx is verified up and running
STEP: using delete to clean up resources
Jan  1 12:29:20.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:20.819: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:29:20.819: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  1 12:29:20.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-szjf4'
Jan  1 12:29:21.047: INFO: stderr: "No resources found.\n"
Jan  1 12:29:21.047: INFO: stdout: ""
Jan  1 12:29:21.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-szjf4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 12:29:21.203: INFO: stderr: ""
Jan  1 12:29:21.204: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:29:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-szjf4" for this suite.
Jan  1 12:29:45.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:29:45.448: INFO: namespace: e2e-tests-kubectl-szjf4, resource: bindings, ignored listing per whitelist
Jan  1 12:29:45.527: INFO: namespace e2e-tests-kubectl-szjf4 deletion completed in 24.310128332s

• [SLOW TEST:60.276 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
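The poll loop in the spec above repeatedly evaluates a kubectl go-template that prints the literal string "true" only when the update-demo container is running; empty stdout means "created but not running" and the framework sleeps ~5s and retries. A minimal sketch of that decision, with the real kubectl invocation (copied from the log) in a comment and a hypothetical helper name:

```shell
# Hypothetical helper mirroring the readiness check the e2e framework logs above.
# The real command (copied verbatim from the log) is:
#   kubectl get pods <pod> -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
# Its stdout is either "true" (container running) or "" (not yet running).
is_running() {
  [ "$1" = "true" ]  # treat exactly "true" as running; anything else retries
}

if is_running "true"; then echo "running"; fi
if ! is_running ""; then echo "created but not running"; fi
```

The helper only classifies the template's stdout; the retry cadence (5-second intervals, visible in the timestamps above) lives in the caller.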
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:29:45.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-64e36bed-2c92-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:29:45.794: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-t9467" to be "success or failure"
Jan  1 12:29:45.902: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 107.816959ms
Jan  1 12:29:48.032: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237850093s
Jan  1 12:29:50.054: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25968628s
Jan  1 12:29:52.375: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581001072s
Jan  1 12:29:54.397: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.602755166s
Jan  1 12:29:56.428: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.634130436s
STEP: Saw pod success
Jan  1 12:29:56.429: INFO: Pod "pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:29:56.446: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 12:29:56.736: INFO: Waiting for pod pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:29:56.752: INFO: Pod pod-projected-configmaps-64e52bf8-2c92-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:29:56.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t9467" for this suite.
Jan  1 12:30:04.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:30:04.883: INFO: namespace: e2e-tests-projected-t9467, resource: bindings, ignored listing per whitelist
Jan  1 12:30:04.921: INFO: namespace e2e-tests-projected-t9467 deletion completed in 8.160526513s

• [SLOW TEST:19.393 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
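The "success or failure" wait logged in the spec above polls the pod's Phase until it reaches a terminal value. A sketch of that condition, assuming an illustrative function name (the Phase strings are the real PodPhase constants):

```shell
# Hypothetical condition mirroring the "success or failure" wait above:
# the framework re-checks the pod every ~2s until Phase is terminal.
phase_is_terminal() {
  case "$1" in
    Succeeded|Failed) return 0 ;;  # terminal: stop polling
    *) return 1 ;;                 # Pending/Running/Unknown: keep waiting
  esac
}
```

In the log above the pod passes through several Pending samples before the loop observes Succeeded and the test declares "Saw pod success".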
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:30:04.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  1 12:30:05.203: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7xztz,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xztz/configmaps/e2e-watch-test-watch-closed,UID:706dcc6d-2c92-11ea-a994-fa163e34d433,ResourceVersion:16796713,Generation:0,CreationTimestamp:2020-01-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 12:30:05.204: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7xztz,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xztz/configmaps/e2e-watch-test-watch-closed,UID:706dcc6d-2c92-11ea-a994-fa163e34d433,ResourceVersion:16796714,Generation:0,CreationTimestamp:2020-01-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  1 12:30:05.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7xztz,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xztz/configmaps/e2e-watch-test-watch-closed,UID:706dcc6d-2c92-11ea-a994-fa163e34d433,ResourceVersion:16796715,Generation:0,CreationTimestamp:2020-01-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 12:30:05.256: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7xztz,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xztz/configmaps/e2e-watch-test-watch-closed,UID:706dcc6d-2c92-11ea-a994-fa163e34d433,ResourceVersion:16796716,Generation:0,CreationTimestamp:2020-01-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:30:05.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7xztz" for this suite.
Jan  1 12:30:11.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:30:11.466: INFO: namespace: e2e-tests-watch-7xztz, resource: bindings, ignored listing per whitelist
Jan  1 12:30:11.494: INFO: namespace e2e-tests-watch-7xztz deletion completed in 6.226463797s

• [SLOW TEST:6.572 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
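The watch test above closes a watch after two events, then resumes from the last observed ResourceVersion so no intervening mutations are missed. The test uses the Go client, but the same resume semantics are reachable from the command line via the raw API; a sketch under that assumption (the `watch` and `resourceVersion` query parameters are real API features, the function name is illustrative):

```shell
# Illustrative: build the raw API path that resumes a configmap watch from
# the last resourceVersion observed before the first watch was closed.
resume_watch_url() {
  local ns="$1" rv="$2"
  echo "/api/v1/namespaces/${ns}/configmaps?watch=true&resourceVersion=${rv}"
}

# Usage against a live cluster (namespace/RV values taken from the log above):
#   kubectl get --raw "$(resume_watch_url e2e-tests-watch-7xztz 16796714)"
```

Resuming from RV 16796714 would replay exactly the MODIFIED (mutation: 2) and DELETED events the test expects.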
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:30:11.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-f29hr
Jan  1 12:30:21.823: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-f29hr
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 12:30:21.841: INFO: Initial restart count of pod liveness-http is 0
Jan  1 12:30:40.018: INFO: Restart count of pod e2e-tests-container-probe-f29hr/liveness-http is now 1 (18.176985788s elapsed)
Jan  1 12:30:58.440: INFO: Restart count of pod e2e-tests-container-probe-f29hr/liveness-http is now 2 (36.59916602s elapsed)
Jan  1 12:31:21.582: INFO: Restart count of pod e2e-tests-container-probe-f29hr/liveness-http is now 3 (59.740609511s elapsed)
Jan  1 12:31:39.777: INFO: Restart count of pod e2e-tests-container-probe-f29hr/liveness-http is now 4 (1m17.935649307s elapsed)
Jan  1 12:32:42.614: INFO: Restart count of pod e2e-tests-container-probe-f29hr/liveness-http is now 5 (2m20.772993959s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:32:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-f29hr" for this suite.
Jan  1 12:32:48.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:32:48.941: INFO: namespace: e2e-tests-container-probe-f29hr, resource: bindings, ignored listing per whitelist
Jan  1 12:32:49.007: INFO: namespace e2e-tests-container-probe-f29hr deletion completed in 6.20227036s

• [SLOW TEST:157.512 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
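The probe test above samples liveness-http's restartCount over time (0, 1, 2, 3, 4, 5 in the log) and asserts it never decreases. A minimal offline sketch of that monotonicity check, with an illustrative function name:

```shell
# Illustrative check that observed restartCount samples never decrease,
# which is the property the probe spec above asserts across its polls.
monotonic() {
  local prev=-1 n
  for n in "$@"; do
    [ "$n" -ge "$prev" ] || return 1  # a decrease would fail the test
    prev="$n"
  done
  return 0
}

monotonic 0 1 2 3 4 5 && echo "restart counts monotonically increasing"
```

A decreasing sample would indicate the kubelet lost or reset container state, which is exactly the regression this conformance test guards against.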
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:32:49.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-d237a9e0-2c92-11ea-8bf6-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-d237a986-2c92-11ea-8bf6-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  1 12:32:49.396: INFO: Waiting up to 5m0s for pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-qcrln" to be "success or failure"
Jan  1 12:32:49.413: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.792681ms
Jan  1 12:32:51.526: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129402754s
Jan  1 12:32:53.543: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146577041s
Jan  1 12:32:55.590: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193542423s
Jan  1 12:32:57.608: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211873011s
Jan  1 12:32:59.620: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224307836s
STEP: Saw pod success
Jan  1 12:32:59.621: INFO: Pod "projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:32:59.629: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan  1 12:33:00.480: INFO: Waiting for pod projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:33:00.497: INFO: Pod projected-volume-d237a70f-2c92-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:33:00.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qcrln" for this suite.
Jan  1 12:33:06.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:33:06.867: INFO: namespace: e2e-tests-projected-qcrln, resource: bindings, ignored listing per whitelist
Jan  1 12:33:06.915: INFO: namespace e2e-tests-projected-qcrln deletion completed in 6.401485071s

• [SLOW TEST:17.907 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:33:06.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  1 12:33:07.165: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xld9n,SelfLink:/api/v1/namespaces/e2e-tests-watch-xld9n/configmaps/e2e-watch-test-label-changed,UID:dce92395-2c92-11ea-a994-fa163e34d433,ResourceVersion:16797019,Generation:0,CreationTimestamp:2020-01-01 12:33:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 12:33:07.166: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xld9n,SelfLink:/api/v1/namespaces/e2e-tests-watch-xld9n/configmaps/e2e-watch-test-label-changed,UID:dce92395-2c92-11ea-a994-fa163e34d433,ResourceVersion:16797020,Generation:0,CreationTimestamp:2020-01-01 12:33:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  1 12:33:07.167: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xld9n,SelfLink:/api/v1/namespaces/e2e-tests-watch-xld9n/configmaps/e2e-watch-test-label-changed,UID:dce92395-2c92-11ea-a994-fa163e34d433,ResourceVersion:16797021,Generation:0,CreationTimestamp:2020-01-01 12:33:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  1 12:33:17.335: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xld9n,SelfLink:/api/v1/namespaces/e2e-tests-watch-xld9n/configmaps/e2e-watch-test-label-changed,UID:dce92395-2c92-11ea-a994-fa163e34d433,ResourceVersion:16797035,Generation:0,CreationTimestamp:2020-01-01 12:33:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 12:33:17.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xld9n,SelfLink:/api/v1/namespaces/e2e-tests-watch-xld9n/configmaps/e2e-watch-test-label-changed,UID:dce92395-2c92-11ea-a994-fa163e34d433,ResourceVersion:16797036,Generation:0,CreationTimestamp:2020-01-01 12:33:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  1 12:33:17.336: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-xld9n,SelfLink:/api/v1/namespaces/e2e-tests-watch-xld9n/configmaps/e2e-watch-test-label-changed,UID:dce92395-2c92-11ea-a994-fa163e34d433,ResourceVersion:16797037,Generation:0,CreationTimestamp:2020-01-01 12:33:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:33:17.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-xld9n" for this suite.
Jan  1 12:33:23.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:33:23.666: INFO: namespace: e2e-tests-watch-xld9n, resource: bindings, ignored listing per whitelist
Jan  1 12:33:23.666: INFO: namespace e2e-tests-watch-xld9n deletion completed in 6.32116749s

• [SLOW TEST:16.748 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:33:23.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:33:24.097: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 12:33:24.140: INFO: Number of nodes with available pods: 0
Jan  1 12:33:24.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:25.718: INFO: Number of nodes with available pods: 0
Jan  1 12:33:25.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:26.174: INFO: Number of nodes with available pods: 0
Jan  1 12:33:26.174: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:27.222: INFO: Number of nodes with available pods: 0
Jan  1 12:33:27.222: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:28.182: INFO: Number of nodes with available pods: 0
Jan  1 12:33:28.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:29.421: INFO: Number of nodes with available pods: 0
Jan  1 12:33:29.421: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:30.267: INFO: Number of nodes with available pods: 0
Jan  1 12:33:30.267: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:31.317: INFO: Number of nodes with available pods: 0
Jan  1 12:33:31.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:32.162: INFO: Number of nodes with available pods: 0
Jan  1 12:33:32.162: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:33.237: INFO: Number of nodes with available pods: 1
Jan  1 12:33:33.237: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods' image.
STEP: Check that daemon pods images are updated.
Jan  1 12:33:33.301: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:34.498: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:35.491: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:36.506: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:37.485: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:38.538: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:39.487: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:40.501: INFO: Wrong image for pod: daemon-set-jxvhc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 12:33:40.502: INFO: Pod daemon-set-jxvhc is not available
Jan  1 12:33:41.492: INFO: Pod daemon-set-bdt4m is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  1 12:33:41.517: INFO: Number of nodes with available pods: 0
Jan  1 12:33:41.517: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:42.802: INFO: Number of nodes with available pods: 0
Jan  1 12:33:42.802: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:43.561: INFO: Number of nodes with available pods: 0
Jan  1 12:33:43.561: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:44.579: INFO: Number of nodes with available pods: 0
Jan  1 12:33:44.579: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:45.531: INFO: Number of nodes with available pods: 0
Jan  1 12:33:45.531: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:48.045: INFO: Number of nodes with available pods: 0
Jan  1 12:33:48.046: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:48.617: INFO: Number of nodes with available pods: 0
Jan  1 12:33:48.618: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:49.544: INFO: Number of nodes with available pods: 0
Jan  1 12:33:49.544: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:33:50.579: INFO: Number of nodes with available pods: 1
Jan  1 12:33:50.579: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kvk6m, will wait for the garbage collector to delete the pods
Jan  1 12:33:50.753: INFO: Deleting DaemonSet.extensions daemon-set took: 18.45681ms
Jan  1 12:33:50.854: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.769377ms
Jan  1 12:34:02.610: INFO: Number of nodes with available pods: 0
Jan  1 12:34:02.611: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 12:34:02.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kvk6m/daemonsets","resourceVersion":"16797141"},"items":null}

Jan  1 12:34:02.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kvk6m/pods","resourceVersion":"16797141"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:34:02.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kvk6m" for this suite.
Jan  1 12:34:10.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:34:10.874: INFO: namespace: e2e-tests-daemonsets-kvk6m, resource: bindings, ignored listing per whitelist
Jan  1 12:34:10.968: INFO: namespace e2e-tests-daemonsets-kvk6m deletion completed in 8.219383907s

• [SLOW TEST:47.301 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:34:10.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  1 12:34:11.243: INFO: Waiting up to 5m0s for pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-downward-api-grdkp" to be "success or failure"
Jan  1 12:34:11.275: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.158689ms
Jan  1 12:34:14.017: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.774156023s
Jan  1 12:34:16.027: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.784095938s
Jan  1 12:34:18.467: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.224072096s
Jan  1 12:34:20.517: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.27474944s
Jan  1 12:34:22.550: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.307754765s
Jan  1 12:34:24.588: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.345634784s
STEP: Saw pod success
Jan  1 12:34:24.589: INFO: Pod "downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:34:24.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  1 12:34:24.792: INFO: Waiting for pod downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:34:24.863: INFO: Pod downward-api-031c61ad-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:34:24.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-grdkp" for this suite.
Jan  1 12:34:30.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:34:31.116: INFO: namespace: e2e-tests-downward-api-grdkp, resource: bindings, ignored listing per whitelist
Jan  1 12:34:31.137: INFO: namespace e2e-tests-downward-api-grdkp deletion completed in 6.248382622s

• [SLOW TEST:20.169 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:34:31.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:34:31.317: INFO: Creating deployment "test-recreate-deployment"
Jan  1 12:34:31.336: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan  1 12:34:31.354: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  1 12:34:33.708: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan  1 12:34:33.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 12:34:35.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 12:34:37.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 12:34:39.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713478871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 12:34:41.737: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  1 12:34:41.764: INFO: Updating deployment test-recreate-deployment
Jan  1 12:34:41.765: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 12:34:42.784: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-6qv2z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qv2z/deployments/test-recreate-deployment,UID:0f1734d8-2c93-11ea-a994-fa163e34d433,ResourceVersion:16797273,Generation:2,CreationTimestamp:2020-01-01 12:34:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-01 12:34:42 +0000 UTC 2020-01-01 12:34:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-01 12:34:42 +0000 UTC 2020-01-01 12:34:31 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  1 12:34:42.858: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-6qv2z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qv2z/replicasets/test-recreate-deployment-589c4bfd,UID:15998a14-2c93-11ea-a994-fa163e34d433,ResourceVersion:16797272,Generation:1,CreationTimestamp:2020-01-01 12:34:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0f1734d8-2c93-11ea-a994-fa163e34d433 0xc000d6679f 0xc000d667b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 12:34:42.858: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  1 12:34:42.860: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-6qv2z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6qv2z/replicasets/test-recreate-deployment-5bf7f65dc,UID:0f1b89f5-2c93-11ea-a994-fa163e34d433,ResourceVersion:16797261,Generation:2,CreationTimestamp:2020-01-01 12:34:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0f1734d8-2c93-11ea-a994-fa163e34d433 0xc000d66870 0xc000d66871}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 12:34:42.937: INFO: Pod "test-recreate-deployment-589c4bfd-hvlk8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-hvlk8,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-6qv2z,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6qv2z/pods/test-recreate-deployment-589c4bfd-hvlk8,UID:159e9ded-2c93-11ea-a994-fa163e34d433,ResourceVersion:16797269,Generation:0,CreationTimestamp:2020-01-01 12:34:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 15998a14-2c93-11ea-a994-fa163e34d433 0xc000d6775f 0xc000d67770}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vjwtw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vjwtw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vjwtw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d67860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d67880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:34:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:34:42.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6qv2z" for this suite.
Jan  1 12:34:54.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:34:55.779: INFO: namespace: e2e-tests-deployment-6qv2z, resource: bindings, ignored listing per whitelist
Jan  1 12:34:56.013: INFO: namespace e2e-tests-deployment-6qv2z deletion completed in 12.284918362s

• [SLOW TEST:24.875 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:34:56.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-zbgwj
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-zbgwj
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-zbgwj
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-zbgwj
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-zbgwj
Jan  1 12:35:08.452: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zbgwj, name: ss-0, uid: 2323c3fe-2c93-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan  1 12:35:12.482: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zbgwj, name: ss-0, uid: 2323c3fe-2c93-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  1 12:35:12.623: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zbgwj, name: ss-0, uid: 2323c3fe-2c93-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  1 12:35:12.639: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-zbgwj
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-zbgwj
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-zbgwj and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  1 12:35:25.106: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zbgwj
Jan  1 12:35:25.116: INFO: Scaling statefulset ss to 0
Jan  1 12:35:45.345: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 12:35:45.359: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:35:45.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-zbgwj" for this suite.
Jan  1 12:35:53.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:35:53.576: INFO: namespace: e2e-tests-statefulset-zbgwj, resource: bindings, ignored listing per whitelist
Jan  1 12:35:53.685: INFO: namespace e2e-tests-statefulset-zbgwj deletion completed in 8.27282595s

• [SLOW TEST:57.672 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:35:53.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  1 12:35:54.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-lcdjf,SelfLink:/api/v1/namespaces/e2e-tests-watch-lcdjf/configmaps/e2e-watch-test-resource-version,UID:40576318-2c93-11ea-a994-fa163e34d433,ResourceVersion:16797526,Generation:0,CreationTimestamp:2020-01-01 12:35:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 12:35:54.118: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-lcdjf,SelfLink:/api/v1/namespaces/e2e-tests-watch-lcdjf/configmaps/e2e-watch-test-resource-version,UID:40576318-2c93-11ea-a994-fa163e34d433,ResourceVersion:16797527,Generation:0,CreationTimestamp:2020-01-01 12:35:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:35:54.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-lcdjf" for this suite.
Jan  1 12:36:00.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:36:00.254: INFO: namespace: e2e-tests-watch-lcdjf, resource: bindings, ignored listing per whitelist
Jan  1 12:36:00.388: INFO: namespace e2e-tests-watch-lcdjf deletion completed in 6.226592899s

• [SLOW TEST:6.703 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:36:00.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-445f129b-2c93-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:36:00.830: INFO: Waiting up to 5m0s for pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-jps2j" to be "success or failure"
Jan  1 12:36:00.843: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.923022ms
Jan  1 12:36:03.248: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417432936s
Jan  1 12:36:05.284: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453404938s
Jan  1 12:36:07.449: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.619085978s
Jan  1 12:36:09.465: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.634741574s
Jan  1 12:36:11.478: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.6480731s
STEP: Saw pod success
Jan  1 12:36:11.478: INFO: Pod "pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:36:11.484: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 12:36:12.517: INFO: Waiting for pod pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:36:12.538: INFO: Pod pod-secrets-4462220a-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:36:12.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jps2j" for this suite.
Jan  1 12:36:20.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:36:20.840: INFO: namespace: e2e-tests-secrets-jps2j, resource: bindings, ignored listing per whitelist
Jan  1 12:36:20.868: INFO: namespace e2e-tests-secrets-jps2j deletion completed in 8.30222865s

• [SLOW TEST:20.479 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:36:20.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:36:21.093: INFO: Waiting up to 5m0s for pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-c95s9" to be "success or failure"
Jan  1 12:36:21.121: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.615639ms
Jan  1 12:36:23.136: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043071677s
Jan  1 12:36:25.193: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099936757s
Jan  1 12:36:27.550: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456589413s
Jan  1 12:36:29.571: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477691303s
Jan  1 12:36:31.586: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.492506976s
STEP: Saw pod success
Jan  1 12:36:31.586: INFO: Pod "downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:36:31.594: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:36:31.696: INFO: Waiting for pod downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:36:32.807: INFO: Pod downwardapi-volume-508389cd-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:36:32.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c95s9" for this suite.
Jan  1 12:36:39.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:36:39.218: INFO: namespace: e2e-tests-projected-c95s9, resource: bindings, ignored listing per whitelist
Jan  1 12:36:39.294: INFO: namespace e2e-tests-projected-c95s9 deletion completed in 6.458463938s

• [SLOW TEST:18.425 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:36:39.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-5b8753de-2c93-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:36:39.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-9j5mt" to be "success or failure"
Jan  1 12:36:39.814: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 206.413578ms
Jan  1 12:36:41.837: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229303519s
Jan  1 12:36:43.873: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265875452s
Jan  1 12:36:46.304: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.696995967s
Jan  1 12:36:48.322: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714165924s
Jan  1 12:36:50.349: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.741185669s
Jan  1 12:36:52.445: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.837271499s
STEP: Saw pod success
Jan  1 12:36:52.445: INFO: Pod "pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:36:52.484: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 12:36:52.742: INFO: Waiting for pod pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:36:52.774: INFO: Pod pod-projected-configmaps-5b8a3f7b-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:36:52.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9j5mt" for this suite.
Jan  1 12:36:58.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:36:58.971: INFO: namespace: e2e-tests-projected-9j5mt, resource: bindings, ignored listing per whitelist
Jan  1 12:36:59.055: INFO: namespace e2e-tests-projected-9j5mt deletion completed in 6.268728825s

• [SLOW TEST:19.761 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:36:59.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  1 12:37:10.041: INFO: Successfully updated pod "annotationupdate67557741-2c93-11ea-8bf6-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:37:12.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-88692" for this suite.
Jan  1 12:37:36.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:37:36.478: INFO: namespace: e2e-tests-downward-api-88692, resource: bindings, ignored listing per whitelist
Jan  1 12:37:36.605: INFO: namespace e2e-tests-downward-api-88692 deletion completed in 24.423616518s

• [SLOW TEST:37.549 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:37:36.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  1 12:37:36.841: INFO: Waiting up to 5m0s for pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-6c8dm" to be "success or failure"
Jan  1 12:37:36.881: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.579466ms
Jan  1 12:37:39.387: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.545146547s
Jan  1 12:37:41.418: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576978711s
Jan  1 12:37:43.647: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.805555373s
Jan  1 12:37:45.676: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834835282s
Jan  1 12:37:47.692: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.850198101s
STEP: Saw pod success
Jan  1 12:37:47.692: INFO: Pod "pod-7da85c2f-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:37:47.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7da85c2f-2c93-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:37:48.619: INFO: Waiting for pod pod-7da85c2f-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:37:48.843: INFO: Pod pod-7da85c2f-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:37:48.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6c8dm" for this suite.
Jan  1 12:37:54.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:37:55.144: INFO: namespace: e2e-tests-emptydir-6c8dm, resource: bindings, ignored listing per whitelist
Jan  1 12:37:55.203: INFO: namespace e2e-tests-emptydir-6c8dm deletion completed in 6.338208805s

• [SLOW TEST:18.598 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:37:55.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0101 12:38:08.325284       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 12:38:08.325: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:38:08.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xs8qq" for this suite.
Jan  1 12:38:33.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:38:33.855: INFO: namespace: e2e-tests-gc-xs8qq, resource: bindings, ignored listing per whitelist
Jan  1 12:38:33.908: INFO: namespace e2e-tests-gc-xs8qq deletion completed in 25.578250205s

• [SLOW TEST:38.704 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:38:33.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  1 12:38:34.371: INFO: Waiting up to 5m0s for pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-gnwgg" to be "success or failure"
Jan  1 12:38:34.557: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 185.95322ms
Jan  1 12:38:36.604: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232501635s
Jan  1 12:38:38.626: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254087337s
Jan  1 12:38:41.071: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.699063809s
Jan  1 12:38:43.088: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716771974s
Jan  1 12:38:45.263: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.891285888s
Jan  1 12:38:47.439: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.067415338s
STEP: Saw pod success
Jan  1 12:38:47.439: INFO: Pod "pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:38:47.638: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:38:47.783: INFO: Waiting for pod pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:38:47.808: INFO: Pod pod-9fec6c7c-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:38:47.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gnwgg" for this suite.
Jan  1 12:38:53.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:38:53.978: INFO: namespace: e2e-tests-emptydir-gnwgg, resource: bindings, ignored listing per whitelist
Jan  1 12:38:54.128: INFO: namespace e2e-tests-emptydir-gnwgg deletion completed in 6.307079968s

• [SLOW TEST:20.217 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:38:54.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:38:54.355: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  1 12:38:59.392: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  1 12:39:05.474: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 12:39:05.550: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-jb2fr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jb2fr/deployments/test-cleanup-deployment,UID:b28394c2-2c93-11ea-a994-fa163e34d433,ResourceVersion:16798034,Generation:1,CreationTimestamp:2020-01-01 12:39:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  1 12:39:05.560: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  1 12:39:05.560: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  1 12:39:05.561: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-jb2fr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jb2fr/replicasets/test-cleanup-controller,UID:abdb4d8d-2c93-11ea-a994-fa163e34d433,ResourceVersion:16798035,Generation:1,CreationTimestamp:2020-01-01 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b28394c2-2c93-11ea-a994-fa163e34d433 0xc000c43be7 0xc000c43be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  1 12:39:05.645: INFO: Pod "test-cleanup-controller-8rtmj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-8rtmj,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-jb2fr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jb2fr/pods/test-cleanup-controller-8rtmj,UID:abdfe475-2c93-11ea-a994-fa163e34d433,ResourceVersion:16798030,Generation:0,CreationTimestamp:2020-01-01 12:38:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller abdb4d8d-2c93-11ea-a994-fa163e34d433 0xc0016bb827 0xc0016bb828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8rx4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8rx4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8rx4n true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016bbd80} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0016bbda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:38:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:39:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:39:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:38:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-01 12:38:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:39:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fafcbecb8048aa6717e759a13a618fa1c72c122c63f5e0984506d66c6a5de7e0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:39:05.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jb2fr" for this suite.
Jan  1 12:39:15.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:39:16.266: INFO: namespace: e2e-tests-deployment-jb2fr, resource: bindings, ignored listing per whitelist
Jan  1 12:39:16.372: INFO: namespace e2e-tests-deployment-jb2fr deletion completed in 10.696906916s

• [SLOW TEST:22.243 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
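Editor's note: the "history to be cleaned up" wait above hinges on `RevisionHistoryLimit:*0` in the dumped spec, which tells the Deployment controller to delete old ReplicaSets as soon as they are superseded. A rough sketch of the equivalent manifest, with field values inferred from the struct dump (not an exact reproduction):

```yaml
# Sketch of test-cleanup-deployment as dumped above; values inferred from the log.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are garbage-collected immediately
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```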
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:39:16.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-nshm
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 12:39:16.738: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nshm" in namespace "e2e-tests-subpath-dp96p" to be "success or failure"
Jan  1 12:39:16.771: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 32.95746ms
Jan  1 12:39:18.784: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045954425s
Jan  1 12:39:20.801: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063249826s
Jan  1 12:39:22.895: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156388836s
Jan  1 12:39:24.918: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1802513s
Jan  1 12:39:26.960: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221688547s
Jan  1 12:39:29.190: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.45212948s
Jan  1 12:39:31.203: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464918866s
Jan  1 12:39:33.224: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 16.486091559s
Jan  1 12:39:35.243: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 18.504568026s
Jan  1 12:39:37.262: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 20.524178974s
Jan  1 12:39:39.294: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 22.555927776s
Jan  1 12:39:41.312: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 24.573445936s
Jan  1 12:39:43.329: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 26.590478251s
Jan  1 12:39:45.346: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 28.607718862s
Jan  1 12:39:47.368: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 30.629807292s
Jan  1 12:39:49.737: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Running", Reason="", readiness=false. Elapsed: 32.999136381s
Jan  1 12:39:51.755: INFO: Pod "pod-subpath-test-configmap-nshm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.017248734s
STEP: Saw pod success
Jan  1 12:39:51.756: INFO: Pod "pod-subpath-test-configmap-nshm" satisfied condition "success or failure"
Jan  1 12:39:51.763: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-nshm container test-container-subpath-configmap-nshm: 
STEP: delete the pod
Jan  1 12:39:52.041: INFO: Waiting for pod pod-subpath-test-configmap-nshm to disappear
Jan  1 12:39:52.099: INFO: Pod pod-subpath-test-configmap-nshm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nshm
Jan  1 12:39:52.099: INFO: Deleting pod "pod-subpath-test-configmap-nshm" in namespace "e2e-tests-subpath-dp96p"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:39:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dp96p" for this suite.
Jan  1 12:39:58.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:39:58.658: INFO: namespace: e2e-tests-subpath-dp96p, resource: bindings, ignored listing per whitelist
Jan  1 12:39:58.687: INFO: namespace e2e-tests-subpath-dp96p deletion completed in 6.393610328s

• [SLOW TEST:42.315 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
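Editor's note: the subpath test above mounts a single ConfigMap key via `subPath` over a path that already exists in the container image. A hedged sketch of such a pod (the ConfigMap name, key, and target file are illustrative, not the exact e2e spec):

```yaml
# Hypothetical sketch of a subPath-over-existing-file mount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: my-configmap            # hypothetical ConfigMap with key "data-1"
  containers:
  - name: test-container-subpath-configmap
    image: busybox
    command: ["cat", "/etc/resolv.conf"]
    volumeMounts:
    - name: config
      mountPath: /etc/resolv.conf   # a file that already exists in the image
      subPath: data-1               # the single key projected over it
```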
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:39:58.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:39:59.003: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  1 12:39:59.011: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-z7m7z/daemonsets","resourceVersion":"16798168"},"items":null}

Jan  1 12:39:59.105: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-z7m7z/pods","resourceVersion":"16798168"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:39:59.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-z7m7z" for this suite.
Jan  1 12:40:05.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:40:05.189: INFO: namespace: e2e-tests-daemonsets-z7m7z, resource: bindings, ignored listing per whitelist
Jan  1 12:40:05.270: INFO: namespace e2e-tests-daemonsets-z7m7z deletion completed in 6.147711417s

S [SKIPPING] [6.580 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  1 12:39:59.003: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:40:05.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  1 12:40:05.568: INFO: Waiting up to 5m0s for pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005" in namespace "e2e-tests-containers-gqf6l" to be "success or failure"
Jan  1 12:40:05.648: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 78.950827ms
Jan  1 12:40:07.723: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154004896s
Jan  1 12:40:09.748: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179290043s
Jan  1 12:40:11.969: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400183996s
Jan  1 12:40:14.003: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.434566411s
Jan  1 12:40:16.059: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.49042468s
STEP: Saw pod success
Jan  1 12:40:16.060: INFO: Pod "client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:40:16.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:40:16.929: INFO: Waiting for pod client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:40:16.946: INFO: Pod client-containers-d64cb03a-2c93-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:40:16.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gqf6l" for this suite.
Jan  1 12:40:23.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:40:23.381: INFO: namespace: e2e-tests-containers-gqf6l, resource: bindings, ignored listing per whitelist
Jan  1 12:40:23.397: INFO: namespace e2e-tests-containers-gqf6l deletion completed in 6.441430616s

• [SLOW TEST:18.127 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
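Editor's note: in pod terms, "overriding the image's default command (docker entrypoint)" means setting `command` on the container, which replaces the image's ENTRYPOINT (while `args` would replace CMD). A minimal hedged sketch, with illustrative names:

```yaml
# Minimal sketch: spec.containers[].command replaces the image ENTRYPOINT.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo overridden"]
```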
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:40:23.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:40:23.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bvh5j" for this suite.
Jan  1 12:40:29.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:40:29.777: INFO: namespace: e2e-tests-services-bvh5j, resource: bindings, ignored listing per whitelist
Jan  1 12:40:29.900: INFO: namespace e2e-tests-services-bvh5j deletion completed in 6.24173341s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.503 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:40:29.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 12:40:30.035: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:40:50.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vx6hc" for this suite.
Jan  1 12:41:15.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:41:15.206: INFO: namespace: e2e-tests-init-container-vx6hc, resource: bindings, ignored listing per whitelist
Jan  1 12:41:15.294: INFO: namespace e2e-tests-init-container-vx6hc deletion completed in 24.257246659s

• [SLOW TEST:45.393 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
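Editor's note: the "invoke init containers on a RestartAlways pod" test creates a pod whose init containers must each run to completion, in order, before the app container starts. A hedged sketch (images and commands are illustrative, not the exact e2e spec):

```yaml
# Sketch: init containers run sequentially to completion before "run1" starts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
```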
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:41:15.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0017d680-2c94-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:41:15.707: INFO: Waiting up to 5m0s for pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-6zht8" to be "success or failure"
Jan  1 12:41:15.725: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.185235ms
Jan  1 12:41:17.744: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036932531s
Jan  1 12:41:19.779: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071773533s
Jan  1 12:41:22.019: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312464592s
Jan  1 12:41:24.030: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322700501s
Jan  1 12:41:26.042: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.33500901s
STEP: Saw pod success
Jan  1 12:41:26.042: INFO: Pod "pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:41:26.048: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 12:41:26.523: INFO: Waiting for pod pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:41:26.543: INFO: Pod pod-secrets-001aac02-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:41:26.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6zht8" for this suite.
Jan  1 12:41:32.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:41:33.017: INFO: namespace: e2e-tests-secrets-6zht8, resource: bindings, ignored listing per whitelist
Jan  1 12:41:33.024: INFO: namespace e2e-tests-secrets-6zht8 deletion completed in 6.448859634s

• [SLOW TEST:17.730 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
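Editor's note: "mappings and Item Mode" refers to the `items` list of a secret volume, which maps a secret key to a chosen file path with a per-item `mode`. A hedged sketch (secret name, key, and paths are illustrative):

```yaml
# Sketch of a secret volume with a key-to-path mapping and per-item file mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapping
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1   # consumed at /etc/secret-volume/new-path-data-1
        mode: 0400              # per-item file mode
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
```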
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:41:33.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  1 12:41:44.002: INFO: Successfully updated pod "annotationupdate0a94d2ba-2c94-11ea-8bf6-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:41:46.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rzd87" for this suite.
Jan  1 12:42:10.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:42:10.417: INFO: namespace: e2e-tests-projected-rzd87, resource: bindings, ignored listing per whitelist
Jan  1 12:42:10.470: INFO: namespace e2e-tests-projected-rzd87 deletion completed in 24.24290701s

• [SLOW TEST:37.445 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
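Editor's note: the annotation-update test works because a projected downwardAPI volume exposing `metadata.annotations` is rewritten by the kubelet when the pod's annotations change; the test updates the annotation and waits for the file content to follow. A hedged sketch (names and values illustrative):

```yaml
# Sketch: the kubelet refreshes /etc/podinfo/annotations after the pod's
# annotations are updated.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```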
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:42:10.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  1 12:42:23.952: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:42:25.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-xslnh" for this suite.
Jan  1 12:42:52.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:42:52.185: INFO: namespace: e2e-tests-replicaset-xslnh, resource: bindings, ignored listing per whitelist
Jan  1 12:42:52.186: INFO: namespace e2e-tests-replicaset-xslnh deletion completed in 26.465039768s

• [SLOW TEST:41.715 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
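Editor's note: adoption and release in the test above are driven purely by label matching: a bare pod whose labels match a ReplicaSet's selector gets an ownerReference added (adoption), and relabeling the pod removes it from the selector so the controller releases it. A hedged sketch of such a matching pair (images and labels illustrative):

```yaml
# Sketch: the bare pod's "name" label matches the ReplicaSet selector, so the
# controller adopts it; changing the pod's label releases it again.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pod-adoption-release
    image: nginx
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx
```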
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:42:52.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:42:52.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-5xrxc" to be "success or failure"
Jan  1 12:42:52.455: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.902024ms
Jan  1 12:42:54.485: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054500039s
Jan  1 12:42:56.561: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130839997s
Jan  1 12:42:58.831: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40042321s
Jan  1 12:43:00.872: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.44191704s
Jan  1 12:43:02.888: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.457206091s
Jan  1 12:43:05.396: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.965200204s
STEP: Saw pod success
Jan  1 12:43:05.396: INFO: Pod "downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:43:05.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:43:05.681: INFO: Waiting for pod downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:43:05.708: INFO: Pod downwardapi-volume-39c386fc-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:43:05.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5xrxc" for this suite.
Jan  1 12:43:11.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:43:11.943: INFO: namespace: e2e-tests-projected-5xrxc, resource: bindings, ignored listing per whitelist
Jan  1 12:43:12.076: INFO: namespace e2e-tests-projected-5xrxc deletion completed in 6.277283135s

• [SLOW TEST:19.890 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
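The Pending → Succeeded transitions logged at roughly 2-second intervals above come from the framework's "success or failure" wait loop. A minimal self-contained sketch of that polling pattern — `waitForPhase` and its parameters are illustrative names, not the actual framework API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForPhase polls getPhase at the given interval until a terminal phase
// ("Succeeded" or "Failed") appears or the timeout elapses, mirroring the
// 5m0s "success or failure" wait in the log. Illustrative sketch only.
func waitForPhase(getPhase func() string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		phase := getPhase()
		if phase == "Succeeded" || phase == "Failed" {
			return phase, nil
		}
		if time.Now().After(deadline) {
			return phase, errors.New("timed out waiting for terminal phase")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Simulate a pod that reports Pending for a few polls before
	// succeeding, like the ~2s-interval transitions above.
	polls := 0
	phase, err := waitForPhase(func() string {
		polls++
		if polls < 4 {
			return "Pending"
		}
		return "Succeeded"
	}, time.Millisecond, time.Second)
	fmt.Println(phase, err)
}
```

The real framework additionally treats a `Failed` phase as meeting the "success or failure" condition so the test can assert on which terminal state was reached.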
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:43:12.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-45a17d1b-2c94-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:43:12.350: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-bnvmf" to be "success or failure"
Jan  1 12:43:12.367: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.254381ms
Jan  1 12:43:14.390: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039360552s
Jan  1 12:43:16.410: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059553516s
Jan  1 12:43:18.633: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282369175s
Jan  1 12:43:20.938: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.587811273s
Jan  1 12:43:22.953: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.602686701s
STEP: Saw pod success
Jan  1 12:43:22.953: INFO: Pod "pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:43:22.960: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 12:43:23.686: INFO: Waiting for pod pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:43:23.853: INFO: Pod pod-projected-configmaps-45a342c0-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:43:23.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bnvmf" for this suite.
Jan  1 12:43:30.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:43:30.378: INFO: namespace: e2e-tests-projected-bnvmf, resource: bindings, ignored listing per whitelist
Jan  1 12:43:30.404: INFO: namespace e2e-tests-projected-bnvmf deletion completed in 6.522083796s

• [SLOW TEST:18.327 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:43:30.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  1 12:43:30.771: INFO: Waiting up to 5m0s for pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-xhltv" to be "success or failure"
Jan  1 12:43:30.806: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.254054ms
Jan  1 12:43:32.818: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047268246s
Jan  1 12:43:34.941: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169880844s
Jan  1 12:43:37.605: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.833924339s
Jan  1 12:43:39.622: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.851180619s
Jan  1 12:43:41.641: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.869936883s
STEP: Saw pod success
Jan  1 12:43:41.641: INFO: Pod "pod-509e23f5-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:43:41.646: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-509e23f5-2c94-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:43:42.563: INFO: Waiting for pod pod-509e23f5-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:43:42.590: INFO: Pod pod-509e23f5-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:43:42.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xhltv" for this suite.
Jan  1 12:43:49.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:43:49.484: INFO: namespace: e2e-tests-emptydir-xhltv, resource: bindings, ignored listing per whitelist
Jan  1 12:43:49.562: INFO: namespace e2e-tests-emptydir-xhltv deletion completed in 6.355618376s

• [SLOW TEST:19.157 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
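The "correct mode" check above has the test container print the emptyDir mount's permission string for the framework to match. emptyDir volumes on the default medium are world-writable by default; a small sketch of the expected mode, shown both as a `ls -l`-style string and in octal:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// 0777 reflects the default emptyDir directory permissions the test
	// expects to observe at the mount point.
	mode := os.FileMode(0777) | os.ModeDir
	fmt.Println(mode.String())             // drwxrwxrwx
	fmt.Printf("%#o\n", os.FileMode(0777)) // 0777
}
```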
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:43:49.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:43:49.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan  1 12:43:49.796: INFO: stderr: ""
Jan  1 12:43:49.796: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan  1 12:43:49.809: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:43:49.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hnxng" for this suite.
Jan  1 12:43:55.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:43:56.092: INFO: namespace: e2e-tests-kubectl-hnxng, resource: bindings, ignored listing per whitelist
Jan  1 12:43:56.214: INFO: namespace e2e-tests-kubectl-hnxng deletion completed in 6.374360504s

S [SKIPPING] [6.652 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan  1 12:43:49.809: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
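The SKIPPING result above comes from a server-version gate: the apiserver reports v1.13.8, below the "1.13.12" minimum the test requires. A simplified, self-contained take on that comparison — `atLeast` and `parse` are illustrative names, not the framework's own helpers:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// atLeast reports whether semantic version a (e.g. "v1.13.8") is greater
// than or equal to b ("v1.13.12"), comparing major, minor, patch in turn.
func atLeast(a, b string) bool {
	pa, pb := parse(a), parse(b)
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] > pb[i]
		}
	}
	return true
}

// parse splits "v1.13.8" into [1, 13, 8]; missing components stay zero.
func parse(v string) [3]int {
	var out [3]int
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	for i := 0; i < len(parts) && i < 3; i++ {
		out[i], _ = strconv.Atoi(parts[i])
	}
	return out
}

func main() {
	// The gate that produced the skip: the v1.13.8 server fails the check.
	fmt.Println(atLeast("v1.13.8", "v1.13.12"))  // false -> skip
	fmt.Println(atLeast("v1.13.12", "v1.13.12")) // true  -> run
}
```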
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:43:56.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5fe9880f-2c94-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 12:43:56.506: INFO: Waiting up to 5m0s for pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-8j296" to be "success or failure"
Jan  1 12:43:56.648: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 142.114426ms
Jan  1 12:43:58.697: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190578182s
Jan  1 12:44:00.730: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224208368s
Jan  1 12:44:03.648: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.142321899s
Jan  1 12:44:05.664: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.15764309s
Jan  1 12:44:07.678: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.172176465s
STEP: Saw pod success
Jan  1 12:44:07.678: INFO: Pod "pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:44:08.422: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 12:44:08.915: INFO: Waiting for pod pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:44:08.928: INFO: Pod pod-secrets-5ff18c4e-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:44:08.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8j296" for this suite.
Jan  1 12:44:14.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:44:15.177: INFO: namespace: e2e-tests-secrets-8j296, resource: bindings, ignored listing per whitelist
Jan  1 12:44:15.239: INFO: namespace e2e-tests-secrets-8j296 deletion completed in 6.305731712s

• [SLOW TEST:19.024 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
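A note on the `defaultMode` the secret test above exercises: the API field is a plain integer, so manifests written in JSON must give the permission bits in decimal — octal 0400 (owner read-only) becomes 256 — while `fsGroup` sets the group ownership of the mounted files. The conversion both ways:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Go octal literal vs. the decimal value the JSON API field carries.
	fmt.Println(int64(0400)) // 256

	// And back: parse the octal digits and print both representations.
	v, _ := strconv.ParseInt("400", 8, 32)
	fmt.Printf("%#o == %d\n", v, v) // 0400 == 256
}
```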
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:44:15.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  1 12:44:28.087: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6b433de6-2c94-11ea-8bf6-0242ac110005"
Jan  1 12:44:28.087: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6b433de6-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-pods-rmr5m" to be "terminated due to deadline exceeded"
Jan  1 12:44:28.210: INFO: Pod "pod-update-activedeadlineseconds-6b433de6-2c94-11ea-8bf6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 123.020293ms
Jan  1 12:44:30.248: INFO: Pod "pod-update-activedeadlineseconds-6b433de6-2c94-11ea-8bf6-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.160188755s
Jan  1 12:44:30.248: INFO: Pod "pod-update-activedeadlineseconds-6b433de6-2c94-11ea-8bf6-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:44:30.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rmr5m" for this suite.
Jan  1 12:44:36.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:44:36.535: INFO: namespace: e2e-tests-pods-rmr5m, resource: bindings, ignored listing per whitelist
Jan  1 12:44:36.649: INFO: namespace e2e-tests-pods-rmr5m deletion completed in 6.393756379s

• [SLOW TEST:21.408 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
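The Running → Failed (Reason="DeadlineExceeded") transition above is the kubelet enforcing `activeDeadlineSeconds`: once a pod has been active longer than the deadline, it is terminated and marked Failed. A sketch of that condition — illustrative only, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// deadlineExceeded reports whether a pod that has been active for
// 'elapsed' has outlived its activeDeadlineSeconds.
func deadlineExceeded(elapsed time.Duration, activeDeadlineSeconds int64) bool {
	return elapsed >= time.Duration(activeDeadlineSeconds)*time.Second
}

func main() {
	// With a 5s deadline (hypothetical figure): still within it at 2s,
	// exceeded at 6s — matching the quick Failed transition in the log.
	fmt.Println(deadlineExceeded(2*time.Second, 5)) // false
	fmt.Println(deadlineExceeded(6*time.Second, 5)) // true
}
```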
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:44:36.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  1 12:44:36.879: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 12:44:36.890: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 12:44:36.893: INFO: 

Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  1 12:44:36.911: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:44:36.911: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:44:36.911: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:44:36.911: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  1 12:44:36.911: INFO: 	Container coredns ready: true, restart count 0
Jan  1 12:44:36.911: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  1 12:44:36.911: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 12:44:36.911: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  1 12:44:36.911: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  1 12:44:36.911: INFO: 	Container weave ready: true, restart count 0
Jan  1 12:44:36.911: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 12:44:36.911: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  1 12:44:36.911: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  1 12:44:37.023: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-781e06e8-2c94-11ea-8bf6-0242ac110005.15e5c3ef50741d58], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-z4tjd/filler-pod-781e06e8-2c94-11ea-8bf6-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-781e06e8-2c94-11ea-8bf6-0242ac110005.15e5c3f07b3997ce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-781e06e8-2c94-11ea-8bf6-0242ac110005.15e5c3f135474acf], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-781e06e8-2c94-11ea-8bf6-0242ac110005.15e5c3f170d421f2], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e5c3f21eaef2fc], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:44:50.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-z4tjd" for this suite.
Jan  1 12:44:58.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:44:59.084: INFO: namespace: e2e-tests-sched-pred-z4tjd, resource: bindings, ignored listing per whitelist
Jan  1 12:44:59.200: INFO: namespace e2e-tests-sched-pred-z4tjd deletion completed in 8.978720056s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.551 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
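The "0/1 nodes are available: 1 Insufficient cpu." event above is the scheduler's CPU fit predicate: the sum of existing requests plus the candidate pod's request must not exceed the node's allocatable CPU. A simplified sketch in millicores — the existing requests are the ones logged above (totalling 770m), while the allocatable figure and candidate requests are hypothetical:

```go
package main

import "fmt"

// fits reports whether a pod requesting reqMilli CPU can schedule on a
// node with allocatableMilli CPU given the existing requests. Simplified
// sketch of the predicate, not the scheduler's actual code.
func fits(existingMilli []int64, reqMilli, allocatableMilli int64) bool {
	var used int64
	for _, r := range existingMilli {
		used += r
	}
	return used+reqMilli <= allocatableMilli
}

func main() {
	// Requests from the log: coredns x2, etcd, apiserver, controller-
	// manager, kube-proxy, scheduler, weave-net.
	existing := []int64{100, 100, 0, 250, 200, 0, 100, 20} // 770m total
	const allocatable = 1000                               // assumed 1-CPU node

	fmt.Println(fits(existing, 200, allocatable)) // 770+200 <= 1000: fits
	fmt.Println(fits(existing, 600, allocatable)) // 770+600 >  1000: Insufficient cpu
}
```

The test drives exactly this: filler pods consume most of the remaining CPU, then one more pod requesting an unavailable amount fails to schedule.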
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:44:59.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-857e4a89-2c94-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:44:59.483: INFO: Waiting up to 5m0s for pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-7glbd" to be "success or failure"
Jan  1 12:44:59.585: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.4454ms
Jan  1 12:45:01.620: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137316654s
Jan  1 12:45:03.634: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151151646s
Jan  1 12:45:05.675: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192466741s
Jan  1 12:45:07.696: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21364637s
Jan  1 12:45:09.713: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.230641195s
STEP: Saw pod success
Jan  1 12:45:09.714: INFO: Pod "pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:45:09.763: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 12:45:09.830: INFO: Waiting for pod pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:45:09.840: INFO: Pod pod-configmaps-857f648b-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:45:09.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7glbd" for this suite.
Jan  1 12:45:15.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:45:16.072: INFO: namespace: e2e-tests-configmap-7glbd, resource: bindings, ignored listing per whitelist
Jan  1 12:45:16.172: INFO: namespace e2e-tests-configmap-7glbd deletion completed in 6.258964594s

• [SLOW TEST:16.971 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:45:16.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:45:26.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mxvzv" for this suite.
Jan  1 12:45:33.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:45:33.157: INFO: namespace: e2e-tests-emptydir-wrapper-mxvzv, resource: bindings, ignored listing per whitelist
Jan  1 12:45:33.296: INFO: namespace e2e-tests-emptydir-wrapper-mxvzv deletion completed in 6.24309564s

• [SLOW TEST:17.123 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:45:33.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  1 12:45:33.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-kw2zr" to be "success or failure"
Jan  1 12:45:33.574: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.931965ms
Jan  1 12:45:35.665: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141733345s
Jan  1 12:45:37.681: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158296023s
Jan  1 12:45:39.933: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409332147s
Jan  1 12:45:41.949: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.426228782s
Jan  1 12:45:43.983: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.459477061s
STEP: Saw pod success
Jan  1 12:45:43.983: INFO: Pod "downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:45:44.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005 container client-container: 
STEP: delete the pod
Jan  1 12:45:44.192: INFO: Waiting for pod downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:45:44.202: INFO: Pod downwardapi-volume-99c981e5-2c94-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:45:44.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kw2zr" for this suite.
Jan  1 12:45:50.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:45:50.511: INFO: namespace: e2e-tests-projected-kw2zr, resource: bindings, ignored listing per whitelist
Jan  1 12:45:50.523: INFO: namespace e2e-tests-projected-kw2zr deletion completed in 6.312410619s

• [SLOW TEST:17.227 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:45:50.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 12:45:50.860: INFO: Number of nodes with available pods: 0
Jan  1 12:45:50.860: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:52.320: INFO: Number of nodes with available pods: 0
Jan  1 12:45:52.321: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:53.692: INFO: Number of nodes with available pods: 0
Jan  1 12:45:53.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:53.897: INFO: Number of nodes with available pods: 0
Jan  1 12:45:53.897: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:54.901: INFO: Number of nodes with available pods: 0
Jan  1 12:45:54.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:55.893: INFO: Number of nodes with available pods: 0
Jan  1 12:45:55.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:57.145: INFO: Number of nodes with available pods: 0
Jan  1 12:45:57.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:58.327: INFO: Number of nodes with available pods: 0
Jan  1 12:45:58.327: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:58.892: INFO: Number of nodes with available pods: 0
Jan  1 12:45:58.892: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:45:59.903: INFO: Number of nodes with available pods: 0
Jan  1 12:45:59.903: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:00.907: INFO: Number of nodes with available pods: 1
Jan  1 12:46:00.907: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  1 12:46:00.952: INFO: Number of nodes with available pods: 0
Jan  1 12:46:00.952: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:01.978: INFO: Number of nodes with available pods: 0
Jan  1 12:46:01.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:03.414: INFO: Number of nodes with available pods: 0
Jan  1 12:46:03.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:04.013: INFO: Number of nodes with available pods: 0
Jan  1 12:46:04.014: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:06.043: INFO: Number of nodes with available pods: 0
Jan  1 12:46:06.043: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:06.991: INFO: Number of nodes with available pods: 0
Jan  1 12:46:06.991: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:08.092: INFO: Number of nodes with available pods: 0
Jan  1 12:46:08.092: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:08.987: INFO: Number of nodes with available pods: 0
Jan  1 12:46:08.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:09.977: INFO: Number of nodes with available pods: 0
Jan  1 12:46:09.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:10.979: INFO: Number of nodes with available pods: 0
Jan  1 12:46:10.979: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:11.981: INFO: Number of nodes with available pods: 0
Jan  1 12:46:11.981: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:13.025: INFO: Number of nodes with available pods: 0
Jan  1 12:46:13.025: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:14.068: INFO: Number of nodes with available pods: 0
Jan  1 12:46:14.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:14.984: INFO: Number of nodes with available pods: 0
Jan  1 12:46:14.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:15.999: INFO: Number of nodes with available pods: 0
Jan  1 12:46:15.999: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:16.976: INFO: Number of nodes with available pods: 0
Jan  1 12:46:16.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:18.160: INFO: Number of nodes with available pods: 0
Jan  1 12:46:18.160: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:18.976: INFO: Number of nodes with available pods: 0
Jan  1 12:46:18.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:19.972: INFO: Number of nodes with available pods: 0
Jan  1 12:46:19.972: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:20.984: INFO: Number of nodes with available pods: 0
Jan  1 12:46:20.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:21.976: INFO: Number of nodes with available pods: 0
Jan  1 12:46:21.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  1 12:46:22.972: INFO: Number of nodes with available pods: 1
Jan  1 12:46:22.972: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6khmc, will wait for the garbage collector to delete the pods
Jan  1 12:46:23.066: INFO: Deleting DaemonSet.extensions daemon-set took: 31.71116ms
Jan  1 12:46:23.267: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.417907ms
Jan  1 12:46:32.684: INFO: Number of nodes with available pods: 0
Jan  1 12:46:32.684: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 12:46:32.689: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6khmc/daemonsets","resourceVersion":"16799090"},"items":null}

Jan  1 12:46:32.692: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6khmc/pods","resourceVersion":"16799090"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:46:32.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6khmc" for this suite.
Jan  1 12:46:38.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:46:38.887: INFO: namespace: e2e-tests-daemonsets-6khmc, resource: bindings, ignored listing per whitelist
Jan  1 12:46:38.910: INFO: namespace e2e-tests-daemonsets-6khmc deletion completed in 6.203277979s

• [SLOW TEST:48.386 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:46:38.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0101 12:47:09.844983       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 12:47:09.845: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:47:09.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cm6bh" for this suite.
Jan  1 12:47:20.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:47:21.050: INFO: namespace: e2e-tests-gc-cm6bh, resource: bindings, ignored listing per whitelist
Jan  1 12:47:21.252: INFO: namespace e2e-tests-gc-cm6bh deletion completed in 11.392599424s

• [SLOW TEST:42.342 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:47:21.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rxgsr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 12:47:21.558: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 12:48:02.271: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rxgsr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 12:48:02.271: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 12:48:02.941: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:48:02.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rxgsr" for this suite.
Jan  1 12:48:33.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:48:33.159: INFO: namespace: e2e-tests-pod-network-test-rxgsr, resource: bindings, ignored listing per whitelist
Jan  1 12:48:33.239: INFO: namespace e2e-tests-pod-network-test-rxgsr deletion completed in 30.27591362s

• [SLOW TEST:71.986 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:48:33.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  1 12:48:33.531: INFO: Waiting up to 5m0s for pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005" in namespace "e2e-tests-containers-q8htl" to be "success or failure"
Jan  1 12:48:33.542: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.022762ms
Jan  1 12:48:35.555: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023717698s
Jan  1 12:48:37.576: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04455985s
Jan  1 12:48:39.597: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066273165s
Jan  1 12:48:41.613: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082107048s
Jan  1 12:48:43.665: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13382066s
Jan  1 12:48:46.634: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.102712758s
STEP: Saw pod success
Jan  1 12:48:46.634: INFO: Pod "client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:48:46.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:48:46.877: INFO: Waiting for pod client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:48:46.883: INFO: Pod client-containers-0514b7ef-2c95-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:48:46.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-q8htl" for this suite.
Jan  1 12:48:53.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:48:53.111: INFO: namespace: e2e-tests-containers-q8htl, resource: bindings, ignored listing per whitelist
Jan  1 12:48:53.202: INFO: namespace e2e-tests-containers-q8htl deletion completed in 6.311940153s

• [SLOW TEST:19.963 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:48:53.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  1 12:48:53.788: INFO: Waiting up to 5m0s for pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-85x77" to be "success or failure"
Jan  1 12:48:53.832: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.344207ms
Jan  1 12:48:55.939: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150888356s
Jan  1 12:48:57.955: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166732579s
Jan  1 12:49:00.613: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.824856301s
Jan  1 12:49:02.636: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.847826501s
Jan  1 12:49:04.680: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.891670037s
STEP: Saw pod success
Jan  1 12:49:04.680: INFO: Pod "pod-1125f2af-2c95-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:49:04.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1125f2af-2c95-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 12:49:04.814: INFO: Waiting for pod pod-1125f2af-2c95-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:49:04.819: INFO: Pod pod-1125f2af-2c95-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:49:04.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-85x77" for this suite.
Jan  1 12:49:10.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:49:11.104: INFO: namespace: e2e-tests-emptydir-85x77, resource: bindings, ignored listing per whitelist
Jan  1 12:49:11.172: INFO: namespace e2e-tests-emptydir-85x77 deletion completed in 6.257507027s

• [SLOW TEST:17.969 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:49:11.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1ba53657-2c95-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:49:11.479: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-gvdx8" to be "success or failure"
Jan  1 12:49:11.490: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.780899ms
Jan  1 12:49:13.787: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307557764s
Jan  1 12:49:15.799: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319990431s
Jan  1 12:49:17.814: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334968831s
Jan  1 12:49:19.836: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356322399s
Jan  1 12:49:21.871: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392012358s
STEP: Saw pod success
Jan  1 12:49:21.872: INFO: Pod "pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:49:21.880: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 12:49:23.689: INFO: Waiting for pod pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:49:23.708: INFO: Pod pod-configmaps-1ba66c99-2c95-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:49:23.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gvdx8" for this suite.
Jan  1 12:49:29.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:49:29.872: INFO: namespace: e2e-tests-configmap-gvdx8, resource: bindings, ignored listing per whitelist
Jan  1 12:49:29.970: INFO: namespace e2e-tests-configmap-gvdx8 deletion completed in 6.239426901s

• [SLOW TEST:18.797 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:49:29.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  1 12:49:30.249: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix151326283/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:49:30.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ggk2x" for this suite.
Jan  1 12:49:36.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:49:36.537: INFO: namespace: e2e-tests-kubectl-ggk2x, resource: bindings, ignored listing per whitelist
Jan  1 12:49:36.892: INFO: namespace e2e-tests-kubectl-ggk2x deletion completed in 6.497993436s

• [SLOW TEST:6.923 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:49:36.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:49:37.111: INFO: Creating deployment "nginx-deployment"
Jan  1 12:49:37.325: INFO: Waiting for observed generation 1
Jan  1 12:49:40.412: INFO: Waiting for all required pods to come up
Jan  1 12:49:40.446: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  1 12:50:23.686: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  1 12:50:23.720: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  1 12:50:23.789: INFO: Updating deployment nginx-deployment
Jan  1 12:50:23.789: INFO: Waiting for observed generation 2
Jan  1 12:50:27.419: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  1 12:50:27.945: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  1 12:50:28.820: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  1 12:50:28.851: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  1 12:50:28.852: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  1 12:50:29.002: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  1 12:50:29.039: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  1 12:50:29.039: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  1 12:50:29.936: INFO: Updating deployment nginx-deployment
Jan  1 12:50:29.936: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  1 12:50:31.216: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  1 12:50:38.631: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
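The replica counts verified above follow from the Deployment controller's proportional scaling: with maxSurge=3, the allowed total grows from 10+3=13 replicas (8 in the first rollout's replicaset, 5 in the second) to 30+3=33, and the 20 extra replicas are split in proportion to each replicaset's current size. A simplified round-to-nearest sketch of that arithmetic (not the exact kube-controller-manager implementation, which also handles rounding leftovers and surge annotations) reproduces the 20/13 split seen in the log:

```python
# Simplified model of Deployment proportional scaling. Assumption: each
# replicaset's share of the scale delta is rounded to the nearest integer;
# the real controller additionally reconciles rounding leftovers.

def proportional_scale(rs_replicas, old_total, new_total):
    """Distribute (new_total - old_total) across replicasets in
    proportion to their current size."""
    delta = new_total - old_total
    return [r + round(r * delta / old_total) for r in rs_replicas]

# Scaling the deployment from 10 to 30 with maxSurge=3: allowed total
# goes from 13 (8 old + 5 new) to 33.
print(proportional_scale([8, 5], 13, 33))  # -> [20, 13]
```

This matches the verification steps above: the first rollout's replicaset lands on .spec.replicas = 20 and the second on 13, summing to the surge-capped total of 33.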
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 12:50:41.433: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-7wq47,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7wq47/deployments/nginx-deployment,UID:2afc198c-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799817,Generation:3,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-01 12:50:31 +0000 UTC 2020-01-01 12:50:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-01 12:50:40 +0000 UTC 2020-01-01 12:49:37 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  1 12:50:42.727: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-7wq47,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7wq47/replicasets/nginx-deployment-5c98f8fb5,UID:46ceb200-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799814,Generation:3,CreationTimestamp:2020-01-01 12:50:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2afc198c-2c95-11ea-a994-fa163e34d433 0xc002bf3ec7 0xc002bf3ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 12:50:42.728: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  1 12:50:42.729: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-7wq47,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7wq47/replicasets/nginx-deployment-85ddf47c5d,UID:2b20d1da-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799803,Generation:3,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2afc198c-2c95-11ea-a994-fa163e34d433 0xc002bf3f87 0xc002bf3f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  1 12:50:42.783: INFO: Pod "nginx-deployment-5c98f8fb5-25m67" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-25m67,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-25m67,UID:4d13d9f2-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799789,Generation:0,CreationTimestamp:2020-01-01 12:50:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccd597 0xc001ccd598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd600} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.784: INFO: Pod "nginx-deployment-5c98f8fb5-5vwtx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5vwtx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-5vwtx,UID:4c2a5506-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799771,Generation:0,CreationTimestamp:2020-01-01 12:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccd697 0xc001ccd698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd700} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.785: INFO: Pod "nginx-deployment-5c98f8fb5-6dfx2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6dfx2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-6dfx2,UID:4fcaeaee-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799805,Generation:0,CreationTimestamp:2020-01-01 12:50:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccd797 0xc001ccd798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd800} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.786: INFO: Pod "nginx-deployment-5c98f8fb5-72d9q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-72d9q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-72d9q,UID:4d14281a-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799795,Generation:0,CreationTimestamp:2020-01-01 12:50:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccd897 0xc001ccd898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccd900} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccd920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.786: INFO: Pod "nginx-deployment-5c98f8fb5-cqbqr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cqbqr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-cqbqr,UID:47271cbb-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799735,Generation:0,CreationTimestamp:2020-01-01 12:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccd997 0xc001ccd998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccda00} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccda20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-01 12:50:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.787: INFO: Pod "nginx-deployment-5c98f8fb5-gtghp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gtghp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-gtghp,UID:4c2a5975-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799772,Generation:0,CreationTimestamp:2020-01-01 12:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccdae7 0xc001ccdae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdb50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.787: INFO: Pod "nginx-deployment-5c98f8fb5-jszjp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jszjp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-jszjp,UID:46ebe40b-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799726,Generation:0,CreationTimestamp:2020-01-01 12:50:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccdbe7 0xc001ccdbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdc50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-01 12:50:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.788: INFO: Pod "nginx-deployment-5c98f8fb5-ktsdx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ktsdx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-ktsdx,UID:47061105-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799730,Generation:0,CreationTimestamp:2020-01-01 12:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccdd37 0xc001ccdd38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdda0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccddc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-01 12:50:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.789: INFO: Pod "nginx-deployment-5c98f8fb5-llt2n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-llt2n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-llt2n,UID:472cce25-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799743,Generation:0,CreationTimestamp:2020-01-01 12:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccde87 0xc001ccde88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ccdef0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001ccdf10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-01 12:50:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.789: INFO: Pod "nginx-deployment-5c98f8fb5-mwxch" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mwxch,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-mwxch,UID:4d14229d-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799791,Generation:0,CreationTimestamp:2020-01-01 12:50:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc001ccdfe7 0xc001ccdfe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002952090} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029520b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.790: INFO: Pod "nginx-deployment-5c98f8fb5-sflmf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sflmf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-sflmf,UID:47060a61-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799734,Generation:0,CreationTimestamp:2020-01-01 12:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc002952127 0xc002952128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002952190} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029521b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-01 12:50:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.790: INFO: Pod "nginx-deployment-5c98f8fb5-v6bzv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v6bzv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-v6bzv,UID:4b419859-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799759,Generation:0,CreationTimestamp:2020-01-01 12:50:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc0029522a7 0xc0029522a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002952310} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002952330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.791: INFO: Pod "nginx-deployment-5c98f8fb5-zrh7w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zrh7w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-5c98f8fb5-zrh7w,UID:4d142b1e-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799794,Generation:0,CreationTimestamp:2020-01-01 12:50:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46ceb200-2c95-11ea-a994-fa163e34d433 0xc002952427 0xc002952428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002952490} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029524b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.791: INFO: Pod "nginx-deployment-85ddf47c5d-6dp4h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6dp4h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-6dp4h,UID:4cb90c54-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799785,Generation:0,CreationTimestamp:2020-01-01 12:50:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002952527 0xc002952528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002952600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002952620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.792: INFO: Pod "nginx-deployment-85ddf47c5d-7t85r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7t85r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-7t85r,UID:4cbc14ae-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799788,Generation:0,CreationTimestamp:2020-01-01 12:50:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002952697 0xc002952698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002952700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002952720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.793: INFO: Pod "nginx-deployment-85ddf47c5d-7vxqx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7vxqx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-7vxqx,UID:2b7e9da2-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799651,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002952ad7 0xc002952ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002952ba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002952bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-01 12:49:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f42c77e15145051f6edffe6fbd9f0029593b5ef43b821f204046c7d256f60b78}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.793: INFO: Pod "nginx-deployment-85ddf47c5d-bxtz5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bxtz5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-bxtz5,UID:4b40f9e7-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799753,Generation:0,CreationTimestamp:2020-01-01 12:50:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002952c87 0xc002952c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002952d70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002952d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.794: INFO: Pod "nginx-deployment-85ddf47c5d-cw4ff" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cw4ff,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-cw4ff,UID:2b58c4ae-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799655,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002952e07 0xc002952e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002952e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002952e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-01 12:49:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://66956a0b1628d15143de171f132bbf17577118fed8538e70edb2b81753d0df02}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.794: INFO: Pod "nginx-deployment-85ddf47c5d-ddcd9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ddcd9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-ddcd9,UID:4cbd5280-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799790,Generation:0,CreationTimestamp:2020-01-01 12:50:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002952fd7 0xc002952fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002953040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002953060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.795: INFO: Pod "nginx-deployment-85ddf47c5d-hgx5j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hgx5j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-hgx5j,UID:4c2b71ce-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799774,Generation:0,CreationTimestamp:2020-01-01 12:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0029536f7 0xc0029536f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002953760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002953780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.796: INFO: Pod "nginx-deployment-85ddf47c5d-kkrwq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kkrwq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-kkrwq,UID:2b389c25-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799679,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0029537f7 0xc0029537f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0029538d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002953900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-01 12:49:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fcf0b3933c3b29e267895a3a725192164fab047398aedddcd285a8a47009eaff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.797: INFO: Pod "nginx-deployment-85ddf47c5d-lsw8h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lsw8h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-lsw8h,UID:4cbb9eef-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799786,Generation:0,CreationTimestamp:2020-01-01 12:50:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc002953c77 0xc002953c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002953ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002953d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.797: INFO: Pod "nginx-deployment-85ddf47c5d-plqd5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-plqd5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-plqd5,UID:4b33695e-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799819,Generation:0,CreationTimestamp:2020-01-01 12:50:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4297 0xc0025b4298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b4300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-01 12:50:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.798: INFO: Pod "nginx-deployment-85ddf47c5d-pr4s9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pr4s9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-pr4s9,UID:2b3f9b6d-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799637,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4587 0xc0025b4588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b45f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-01 12:49:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://69deece7fb27bf4ad37d01a534c1bf946c85f6d653ddf508ec7e065d178310c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.799: INFO: Pod "nginx-deployment-85ddf47c5d-pvq8q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pvq8q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-pvq8q,UID:2b7e4a66-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799666,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4757 0xc0025b4758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b47c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b47e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-01 12:49:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1d2808d281a3a95a531b033991b92cb2969fb344970dbb2548c5401a1f3cc7ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.799: INFO: Pod "nginx-deployment-85ddf47c5d-rp8dp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rp8dp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-rp8dp,UID:4c2b1a21-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799764,Generation:0,CreationTimestamp:2020-01-01 12:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4917 0xc0025b4918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b4980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b49a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:33 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.800: INFO: Pod "nginx-deployment-85ddf47c5d-sq57s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sq57s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-sq57s,UID:2b58b25a-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799649,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4a17 0xc0025b4a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b4a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-01 12:49:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5406a1c4d4d77a839ffeba90321d684d6b248944afc2ffcdeba7e8093fdb49cc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.800: INFO: Pod "nginx-deployment-85ddf47c5d-tj4lv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tj4lv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-tj4lv,UID:4c2b003c-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799763,Generation:0,CreationTimestamp:2020-01-01 12:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4bd7 0xc0025b4bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b4c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:33 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.801: INFO: Pod "nginx-deployment-85ddf47c5d-tpbhd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tpbhd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-tpbhd,UID:4c2b5037-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799773,Generation:0,CreationTimestamp:2020-01-01 12:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4cd7 0xc0025b4cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b4d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:34 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.802: INFO: Pod "nginx-deployment-85ddf47c5d-vnwzn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vnwzn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-vnwzn,UID:2b5854d2-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799663,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b4dd7 0xc0025b4dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b4f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b4fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-01 12:49:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://579e423c8ebb12919c45076f1d61a7b6e9ece8dd1057ea133ee55e94e6c56e41}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.803: INFO: Pod "nginx-deployment-85ddf47c5d-x98sh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x98sh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-x98sh,UID:4b415b86-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799758,Generation:0,CreationTimestamp:2020-01-01 12:50:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b5067 0xc0025b5068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b50d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b51d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.803: INFO: Pod "nginx-deployment-85ddf47c5d-xks9j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xks9j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-xks9j,UID:2b4032bb-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799674,Generation:0,CreationTimestamp:2020-01-01 12:49:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b5277 0xc0025b5278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b5770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b5790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:49:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-01 12:49:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:50:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9238af5a845daaced3d389effb0693899e327e79158cc772f811a7e53782b723}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  1 12:50:42.804: INFO: Pod "nginx-deployment-85ddf47c5d-xl8jw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xl8jw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-7wq47,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7wq47/pods/nginx-deployment-85ddf47c5d-xl8jw,UID:4cba3bd8-2c95-11ea-a994-fa163e34d433,ResourceVersion:16799783,Generation:0,CreationTimestamp:2020-01-01 12:50:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2b20d1da-2c95-11ea-a994-fa163e34d433 0xc0025b5a67 0xc0025b5a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkmkx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkmkx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkmkx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025b5ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025b5af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:50:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:50:42.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7wq47" for this suite.
Jan  1 12:51:38.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:51:38.732: INFO: namespace: e2e-tests-deployment-7wq47, resource: bindings, ignored listing per whitelist
Jan  1 12:51:38.789: INFO: namespace e2e-tests-deployment-7wq47 deletion completed in 54.873261527s

• [SLOW TEST:121.896 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:51:38.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  1 12:51:39.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4hxhv'
Jan  1 12:51:42.773: INFO: stderr: ""
Jan  1 12:51:42.774: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  1 12:51:43.797: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:43.797: INFO: Found 0 / 1
Jan  1 12:51:44.798: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:44.798: INFO: Found 0 / 1
Jan  1 12:51:46.584: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:46.585: INFO: Found 0 / 1
Jan  1 12:51:46.826: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:46.827: INFO: Found 0 / 1
Jan  1 12:51:47.795: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:47.795: INFO: Found 0 / 1
Jan  1 12:51:49.467: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:49.468: INFO: Found 0 / 1
Jan  1 12:51:49.841: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:49.842: INFO: Found 0 / 1
Jan  1 12:51:50.793: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:50.794: INFO: Found 0 / 1
Jan  1 12:51:51.984: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:51.984: INFO: Found 0 / 1
Jan  1 12:51:53.080: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:53.081: INFO: Found 0 / 1
Jan  1 12:51:53.893: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:53.893: INFO: Found 0 / 1
Jan  1 12:51:54.814: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:54.814: INFO: Found 0 / 1
Jan  1 12:51:56.289: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:56.289: INFO: Found 0 / 1
Jan  1 12:51:57.129: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:57.130: INFO: Found 0 / 1
Jan  1 12:51:57.794: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:57.794: INFO: Found 0 / 1
Jan  1 12:51:58.793: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:58.793: INFO: Found 0 / 1
Jan  1 12:51:59.800: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:51:59.800: INFO: Found 0 / 1
Jan  1 12:52:00.802: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:52:00.802: INFO: Found 1 / 1
Jan  1 12:52:00.802: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  1 12:52:00.808: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 12:52:00.808: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  1 12:52:00.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pfwnm redis-master --namespace=e2e-tests-kubectl-4hxhv'
Jan  1 12:52:01.034: INFO: stderr: ""
Jan  1 12:52:01.034: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jan 12:51:59.492 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 12:51:59.492 # Server started, Redis version 3.2.12\n1:M 01 Jan 12:51:59.493 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 12:51:59.493 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  1 12:52:01.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pfwnm redis-master --namespace=e2e-tests-kubectl-4hxhv --tail=1'
Jan  1 12:52:01.247: INFO: stderr: ""
Jan  1 12:52:01.248: INFO: stdout: "1:M 01 Jan 12:51:59.493 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  1 12:52:01.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pfwnm redis-master --namespace=e2e-tests-kubectl-4hxhv --limit-bytes=1'
Jan  1 12:52:01.434: INFO: stderr: ""
Jan  1 12:52:01.434: INFO: stdout: " "
STEP: exposing timestamps
Jan  1 12:52:01.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pfwnm redis-master --namespace=e2e-tests-kubectl-4hxhv --tail=1 --timestamps'
Jan  1 12:52:01.558: INFO: stderr: ""
Jan  1 12:52:01.559: INFO: stdout: "2020-01-01T12:51:59.495804214Z 1:M 01 Jan 12:51:59.493 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  1 12:52:04.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pfwnm redis-master --namespace=e2e-tests-kubectl-4hxhv --since=1s'
Jan  1 12:52:04.217: INFO: stderr: ""
Jan  1 12:52:04.217: INFO: stdout: ""
Jan  1 12:52:04.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pfwnm redis-master --namespace=e2e-tests-kubectl-4hxhv --since=24h'
Jan  1 12:52:04.410: INFO: stderr: ""
Jan  1 12:52:04.411: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jan 12:51:59.492 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 12:51:59.492 # Server started, Redis version 3.2.12\n1:M 01 Jan 12:51:59.493 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 12:51:59.493 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  1 12:52:04.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4hxhv'
Jan  1 12:52:04.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:52:04.680: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  1 12:52:04.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-4hxhv'
Jan  1 12:52:04.946: INFO: stderr: "No resources found.\n"
Jan  1 12:52:04.947: INFO: stdout: ""
Jan  1 12:52:04.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-4hxhv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 12:52:05.180: INFO: stderr: ""
Jan  1 12:52:05.180: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:52:05.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4hxhv" for this suite.
Jan  1 12:52:29.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:52:29.297: INFO: namespace: e2e-tests-kubectl-4hxhv, resource: bindings, ignored listing per whitelist
Jan  1 12:52:29.484: INFO: namespace e2e-tests-kubectl-4hxhv deletion completed in 24.294302502s

• [SLOW TEST:50.694 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:52:29.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  1 12:52:29.677: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  1 12:52:29.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:52:30.177: INFO: stderr: ""
Jan  1 12:52:30.178: INFO: stdout: "service/redis-slave created\n"
Jan  1 12:52:30.178: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  1 12:52:30.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:52:30.699: INFO: stderr: ""
Jan  1 12:52:30.699: INFO: stdout: "service/redis-master created\n"
Jan  1 12:52:30.701: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  1 12:52:30.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:52:31.473: INFO: stderr: ""
Jan  1 12:52:31.473: INFO: stdout: "service/frontend created\n"
Jan  1 12:52:31.476: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  1 12:52:31.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:52:31.943: INFO: stderr: ""
Jan  1 12:52:31.944: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  1 12:52:31.945: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  1 12:52:31.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:52:32.748: INFO: stderr: ""
Jan  1 12:52:32.748: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  1 12:52:32.749: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  1 12:52:32.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:52:35.153: INFO: stderr: ""
Jan  1 12:52:35.154: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  1 12:52:35.154: INFO: Waiting for all frontend pods to be Running.
Jan  1 12:53:05.208: INFO: Waiting for frontend to serve content.
Jan  1 12:53:08.981: INFO: Trying to add a new entry to the guestbook.
Jan  1 12:53:09.081: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  1 12:53:09.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:53:09.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:53:09.561: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 12:53:09.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:53:09.842: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:53:09.843: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 12:53:09.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:53:10.022: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:53:10.023: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 12:53:10.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:53:10.144: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:53:10.144: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 12:53:10.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:53:10.399: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:53:10.399: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 12:53:10.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2wzpt'
Jan  1 12:53:10.800: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 12:53:10.800: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:53:10.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2wzpt" for this suite.
Jan  1 12:53:57.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:53:57.221: INFO: namespace: e2e-tests-kubectl-2wzpt, resource: bindings, ignored listing per whitelist
Jan  1 12:53:57.262: INFO: namespace e2e-tests-kubectl-2wzpt deletion completed in 46.442298556s

• [SLOW TEST:87.778 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:53:57.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 12:54:07.654: INFO: Waiting up to 5m0s for pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005" in namespace "e2e-tests-pods-ptt68" to be "success or failure"
Jan  1 12:54:07.662: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561004ms
Jan  1 12:54:09.825: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171265968s
Jan  1 12:54:11.928: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274489448s
Jan  1 12:54:13.947: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29326869s
Jan  1 12:54:15.971: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316730366s
Jan  1 12:54:17.983: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.329556482s
STEP: Saw pod success
Jan  1 12:54:17.984: INFO: Pod "client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:54:17.989: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005 container env3cont: 
STEP: delete the pod
Jan  1 12:54:18.625: INFO: Waiting for pod client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:54:18.939: INFO: Pod client-envvars-cc3ac64a-2c95-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:54:18.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ptt68" for this suite.
Jan  1 12:55:05.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:55:05.239: INFO: namespace: e2e-tests-pods-ptt68, resource: bindings, ignored listing per whitelist
Jan  1 12:55:05.282: INFO: namespace e2e-tests-pods-ptt68 deletion completed in 46.328764228s

• [SLOW TEST:68.018 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:55:05.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-eebf82aa-2c95-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-eebf82aa-2c95-11ea-8bf6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:55:22.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c76jt" for this suite.
Jan  1 12:55:50.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:55:50.253: INFO: namespace: e2e-tests-projected-c76jt, resource: bindings, ignored listing per whitelist
Jan  1 12:55:50.291: INFO: namespace e2e-tests-projected-c76jt deletion completed in 28.223111729s

• [SLOW TEST:45.010 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:55:50.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 12:55:50.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-c7rfb'
Jan  1 12:55:50.772: INFO: stderr: ""
Jan  1 12:55:50.773: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  1 12:55:50.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-c7rfb'
Jan  1 12:56:02.743: INFO: stderr: ""
Jan  1 12:56:02.744: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:56:02.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c7rfb" for this suite.
Jan  1 12:56:09.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:56:09.352: INFO: namespace: e2e-tests-kubectl-c7rfb, resource: bindings, ignored listing per whitelist
Jan  1 12:56:09.518: INFO: namespace e2e-tests-kubectl-c7rfb deletion completed in 6.666631455s

• [SLOW TEST:19.226 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
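An aside on the kubectl invocation in the test above: with `--restart=Never` and `--generator=run-pod/v1`, kubectl creates a bare Pod rather than a Deployment. A minimal sketch of the object it submits follows; the name and image mirror the log, but the real object kubectl builds carries additional defaulted fields.

```python
# Sketch of the Pod that `kubectl run --restart=Never` creates.
# Field values are illustrative, taken from the log lines above.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "e2e-test-nginx-pod"},
    "spec": {
        "restartPolicy": "Never",  # set by --restart=Never
        "containers": [
            {
                "name": "e2e-test-nginx-pod",
                "image": "docker.io/library/nginx:1.14-alpine",
            },
        ],
    },
}

assert pod["spec"]["restartPolicy"] == "Never"
```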
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:56:09.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  1 12:56:09.959: INFO: PodSpec: initContainers in spec.initContainers
Jan  1 12:57:20.589: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1523a7ad-2c96-11ea-8bf6-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-9zbs5", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-9zbs5/pods/pod-init-1523a7ad-2c96-11ea-8bf6-0242ac110005", UID:"15267760-2c96-11ea-a994-fa163e34d433", ResourceVersion:"16800776", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713480169, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"959672400"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fw4cc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001340b40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fw4cc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fw4cc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fw4cc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dd9938), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000fd6a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dd99b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dd9a60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001dd9a68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001dd9a6c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480170, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480170, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480170, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480170, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001980360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e55ab0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e55b20)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a8ae4cc0dc14b070e5d860148bf4afb81ff769b964c983624c571537f61bbbc9"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0019803a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001980380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:57:20.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9zbs5" for this suite.
Jan  1 12:57:44.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:57:45.072: INFO: namespace: e2e-tests-init-container-9zbs5, resource: bindings, ignored listing per whitelist
Jan  1 12:57:45.075: INFO: namespace e2e-tests-init-container-9zbs5 deletion completed in 24.459878292s

• [SLOW TEST:95.557 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
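The roughly 70-second wait before the "init container has failed twice" line (and the `RestartCount:3` in the dumped status) is consistent with the kubelet's crash-loop backoff, which approximately doubles the delay between restarts of a failing container, starting around 10 s and capped at 5 min. A sketch of that schedule (an approximation: the kubelet also applies jitter and resets the backoff after a period of stable running):

```python
def restart_delays(n, base=10, cap=300):
    """Approximate kubelet crash-loop backoff: 10s, 20s, 40s, ... capped at 5m."""
    delay, out = base, []
    for _ in range(n):
        out.append(delay)
        delay = min(delay * 2, cap)
    return out

# First few restarts of a failing init container land at ~10s, 30s, 70s
# cumulative, matching the timeline in the log above.
assert restart_delays(5) == [10, 20, 40, 80, 160]
```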
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:57:45.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9sc48
Jan  1 12:58:01.530: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9sc48
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 12:58:01.543: INFO: Initial restart count of pod liveness-exec is 0
Jan  1 12:59:03.122: INFO: Restart count of pod e2e-tests-container-probe-9sc48/liveness-exec is now 1 (1m1.578811611s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:59:03.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9sc48" for this suite.
Jan  1 12:59:15.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:59:15.588: INFO: namespace: e2e-tests-container-probe-9sc48, resource: bindings, ignored listing per whitelist
Jan  1 12:59:15.718: INFO: namespace e2e-tests-container-probe-9sc48 deletion completed in 12.382967655s

• [SLOW TEST:90.643 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
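For reference, the point at which a consistently failing exec liveness probe triggers a restart can be estimated as initialDelaySeconds + failureThreshold × periodSeconds (ignoring probe timeout and jitter). The parameter values below are hypothetical, not read from the suite:

```python
def first_restart_estimate(initial_delay, period, failure_threshold):
    """Seconds until the kubelet has observed `failure_threshold`
    consecutive probe failures and kills the container."""
    return initial_delay + failure_threshold * period

# Hypothetical probe settings: delay=15s, period=5s, threshold=3.
assert first_restart_estimate(15, 5, 3) == 30
```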
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:59:15.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-83fc23fe-2c96-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 12:59:15.995: INFO: Waiting up to 5m0s for pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-n9jvg" to be "success or failure"
Jan  1 12:59:16.080: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.217501ms
Jan  1 12:59:18.365: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37025464s
Jan  1 12:59:20.395: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400008988s
Jan  1 12:59:22.422: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427254426s
Jan  1 12:59:26.569: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.574313453s
Jan  1 12:59:28.651: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.656563735s
Jan  1 12:59:32.055: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.059760838s
Jan  1 12:59:34.932: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.937298228s
STEP: Saw pod success
Jan  1 12:59:34.933: INFO: Pod "pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 12:59:35.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 12:59:37.241: INFO: Waiting for pod pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005 to disappear
Jan  1 12:59:37.249: INFO: Pod pod-configmaps-840036a0-2c96-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 12:59:37.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n9jvg" for this suite.
Jan  1 12:59:45.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 12:59:45.734: INFO: namespace: e2e-tests-configmap-n9jvg, resource: bindings, ignored listing per whitelist
Jan  1 12:59:45.793: INFO: namespace e2e-tests-configmap-n9jvg deletion completed in 8.537693731s

• [SLOW TEST:30.073 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
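A note on the `defaultMode` field the test above exercises: in the JSON API it is a decimal integer, so the familiar octal file permissions must be converted before being placed in a manifest (e.g. 0644 becomes 420). A quick check:

```python
# ConfigMap/Secret volume `defaultMode` is serialized as decimal in JSON,
# even though file modes are conventionally written in octal.
assert 0o644 == 420   # rw-r--r--
assert 0o400 == 256   # r-------- (a common choice for secrets)
print(f"mode 0644 (octal) is {0o644} in a JSON manifest")
```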
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 12:59:45.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  1 12:59:57.047: INFO: 10 pods remaining
Jan  1 12:59:57.048: INFO: 10 pods has nil DeletionTimestamp
Jan  1 12:59:57.048: INFO: 
Jan  1 13:00:02.661: INFO: 10 pods remaining
Jan  1 13:00:02.662: INFO: 0 pods has nil DeletionTimestamp
Jan  1 13:00:02.662: INFO: 
Jan  1 13:00:04.268: INFO: 0 pods remaining
Jan  1 13:00:04.269: INFO: 0 pods has nil DeletionTimestamp
Jan  1 13:00:04.269: INFO: 
Jan  1 13:00:05.225: INFO: 0 pods remaining
Jan  1 13:00:05.225: INFO: 0 pods has nil DeletionTimestamp
Jan  1 13:00:05.225: INFO: 
Jan  1 13:00:07.278: INFO: 0 pods remaining
Jan  1 13:00:07.278: INFO: 0 pods has nil DeletionTimestamp
Jan  1 13:00:07.278: INFO: 
STEP: Gathering metrics
W0101 13:00:07.697289       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:00:07.697: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:00:07.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wq79d" for this suite.
Jan  1 13:00:25.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:00:26.044: INFO: namespace: e2e-tests-gc-wq79d, resource: bindings, ignored listing per whitelist
Jan  1 13:00:26.118: INFO: namespace e2e-tests-gc-wq79d deletion completed in 18.412325854s

• [SLOW TEST:40.324 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
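The deleteOptions behavior verified above corresponds to foreground cascading deletion: the owner (the RC) is retained, blocked by the `foregroundDeletion` finalizer, until the garbage collector has removed all of its dependents. A sketch of the request body such a delete carries, based on the public DeleteOptions API rather than the suite's exact call:

```python
# Sketch of a foreground-cascading DeleteOptions body. With this policy
# the RC stays visible (DeletionTimestamp set) until its pods are gone,
# which is exactly the "pods remaining" countdown in the log above.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",
}

assert delete_options["propagationPolicy"] == "Foreground"
```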
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:00:26.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 13:00:26.593: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  1 13:00:31.643: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  1 13:00:41.709: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  1 13:00:43.751: INFO: Creating deployment "test-rollover-deployment"
Jan  1 13:00:43.912: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  1 13:00:46.940: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  1 13:00:47.099: INFO: Ensure that both replica sets have 1 created replica
Jan  1 13:00:47.120: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  1 13:00:47.144: INFO: Updating deployment test-rollover-deployment
Jan  1 13:00:47.144: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  1 13:00:49.580: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  1 13:00:49.590: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  1 13:00:49.596: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:00:49.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480449, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:00:51.711: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:00:51.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480449, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:00:53.640: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:00:53.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480449, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:00:55.948: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:00:55.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480449, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:00:57.607: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:00:57.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480449, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:00:59.619: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:00:59.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:01:01.614: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:01:01.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:01:03.654: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:01:03.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:01:05.705: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:01:05.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:01:07.645: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 13:01:07.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:01:09.733: INFO: 
Jan  1 13:01:09.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480444, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480469, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713480443, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:01:11.623: INFO: 
Jan  1 13:01:11.623: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  1 13:01:11.640: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-l5gjz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l5gjz/deployments/test-rollover-deployment,UID:b856101e-2c96-11ea-a994-fa163e34d433,ResourceVersion:16801266,Generation:2,CreationTimestamp:2020-01-01 13:00:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-01 13:00:44 +0000 UTC 2020-01-01 13:00:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-01 13:01:09 +0000 UTC 2020-01-01 13:00:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  1 13:01:11.650: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-l5gjz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l5gjz/replicasets/test-rollover-deployment-5b8479fdb6,UID:ba5c3318-2c96-11ea-a994-fa163e34d433,ResourceVersion:16801254,Generation:2,CreationTimestamp:2020-01-01 13:00:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b856101e-2c96-11ea-a994-fa163e34d433 0xc001ccc967 0xc001ccc968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  1 13:01:11.650: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  1 13:01:11.651: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-l5gjz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l5gjz/replicasets/test-rollover-controller,UID:ae03798b-2c96-11ea-a994-fa163e34d433,ResourceVersion:16801265,Generation:2,CreationTimestamp:2020-01-01 13:00:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b856101e-2c96-11ea-a994-fa163e34d433 0xc001ccc7a7 0xc001ccc7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 13:01:11.651: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-l5gjz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l5gjz/replicasets/test-rollover-deployment-58494b7559,UID:b875d27d-2c96-11ea-a994-fa163e34d433,ResourceVersion:16801221,Generation:2,CreationTimestamp:2020-01-01 13:00:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b856101e-2c96-11ea-a994-fa163e34d433 0xc001ccc897 0xc001ccc898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 13:01:11.661: INFO: Pod "test-rollover-deployment-5b8479fdb6-lkmdk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-lkmdk,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-l5gjz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l5gjz/pods/test-rollover-deployment-5b8479fdb6-lkmdk,UID:bb27f7b6-2c96-11ea-a994-fa163e34d433,ResourceVersion:16801239,Generation:0,CreationTimestamp:2020-01-01 13:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 ba5c3318-2c96-11ea-a994-fa163e34d433 0xc001f330c7 0xc001f330c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhm9k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhm9k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xhm9k true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f33130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f33150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:00:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:00:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:00:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:00:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-01 13:00:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-01 13:00:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://975c3b2486296400cc30d71da7284c94909936f411be5d7b36dfd3f9728b5372}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:01:11.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-l5gjz" for this suite.
Jan  1 13:01:21.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:01:22.258: INFO: namespace: e2e-tests-deployment-l5gjz, resource: bindings, ignored listing per whitelist
Jan  1 13:01:22.376: INFO: namespace e2e-tests-deployment-l5gjz deletion completed in 10.704252922s

• [SLOW TEST:56.257 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:01:22.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  1 13:01:22.681: INFO: Creating ReplicaSet my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005
Jan  1 13:01:22.710: INFO: Pod name my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005: Found 0 pods out of 1
Jan  1 13:01:28.016: INFO: Pod name my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005: Found 1 pods out of 1
Jan  1 13:01:28.016: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005" is running
Jan  1 13:01:34.069: INFO: Pod "my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005-cv7n2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:01:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:01:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:01:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:01:22 +0000 UTC Reason: Message:}])
Jan  1 13:01:34.070: INFO: Trying to dial the pod
Jan  1 13:01:39.126: INFO: Controller my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005: Got expected result from replica 1 [my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005-cv7n2]: "my-hostname-basic-cf894ae7-2c96-11ea-8bf6-0242ac110005-cv7n2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:01:39.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-fkbss" for this suite.
Jan  1 13:01:48.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:01:51.184: INFO: namespace: e2e-tests-replicaset-fkbss, resource: bindings, ignored listing per whitelist
Jan  1 13:01:51.196: INFO: namespace e2e-tests-replicaset-fkbss deletion completed in 12.059929245s

• [SLOW TEST:28.820 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:01:51.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:02:06.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qkjdr" for this suite.
Jan  1 13:02:48.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:02:48.351: INFO: namespace: e2e-tests-kubelet-test-qkjdr, resource: bindings, ignored listing per whitelist
Jan  1 13:02:48.362: INFO: namespace e2e-tests-kubelet-test-qkjdr deletion completed in 42.200278731s

• [SLOW TEST:57.165 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:02:48.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  1 13:02:48.867: INFO: Waiting up to 5m0s for pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-b5m45" to be "success or failure"
Jan  1 13:02:48.888: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.666967ms
Jan  1 13:02:50.911: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043955649s
Jan  1 13:02:52.920: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05305421s
Jan  1 13:02:54.996: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129220645s
Jan  1 13:02:57.020: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153208049s
Jan  1 13:02:59.049: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181389285s
STEP: Saw pod success
Jan  1 13:02:59.049: INFO: Pod "pod-02e3ff27-2c97-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:02:59.055: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-02e3ff27-2c97-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 13:02:59.199: INFO: Waiting for pod pod-02e3ff27-2c97-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:02:59.214: INFO: Pod pod-02e3ff27-2c97-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:02:59.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b5m45" for this suite.
Jan  1 13:03:05.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:03:05.504: INFO: namespace: e2e-tests-emptydir-b5m45, resource: bindings, ignored listing per whitelist
Jan  1 13:03:05.564: INFO: namespace e2e-tests-emptydir-b5m45 deletion completed in 6.280813435s

• [SLOW TEST:17.202 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:03:05.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-x7kd6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 13:03:05.888: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 13:03:38.432: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-x7kd6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 13:03:38.432: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 13:03:39.997: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:03:39.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-x7kd6" for this suite.
Jan  1 13:04:06.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:04:06.223: INFO: namespace: e2e-tests-pod-network-test-x7kd6, resource: bindings, ignored listing per whitelist
Jan  1 13:04:06.248: INFO: namespace e2e-tests-pod-network-test-x7kd6 deletion completed in 26.230451304s

• [SLOW TEST:60.684 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:04:06.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  1 13:04:06.544: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  1 13:04:11.563: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:04:14.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-mpbsz" for this suite.
Jan  1 13:04:24.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:04:24.747: INFO: namespace: e2e-tests-replication-controller-mpbsz, resource: bindings, ignored listing per whitelist
Jan  1 13:04:24.792: INFO: namespace e2e-tests-replication-controller-mpbsz deletion completed in 10.542818698s

• [SLOW TEST:18.544 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:04:24.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3d7533b5-2c97-11ea-8bf6-0242ac110005
STEP: Creating secret with name s-test-opt-upd-3d753647-2c97-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3d7533b5-2c97-11ea-8bf6-0242ac110005
STEP: Updating secret s-test-opt-upd-3d753647-2c97-11ea-8bf6-0242ac110005
STEP: Creating secret with name s-test-opt-create-3d7536b3-2c97-11ea-8bf6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:06:01.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gzlh7" for this suite.
Jan  1 13:06:27.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:06:27.336: INFO: namespace: e2e-tests-projected-gzlh7, resource: bindings, ignored listing per whitelist
Jan  1 13:06:27.503: INFO: namespace e2e-tests-projected-gzlh7 deletion completed in 26.254365766s

• [SLOW TEST:122.711 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:06:27.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:06:27.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-clhzl'
Jan  1 13:06:29.723: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 13:06:29.723: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  1 13:06:31.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-clhzl'
Jan  1 13:06:32.356: INFO: stderr: ""
Jan  1 13:06:32.356: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:06:32.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-clhzl" for this suite.
Jan  1 13:06:56.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:06:56.652: INFO: namespace: e2e-tests-kubectl-clhzl, resource: bindings, ignored listing per whitelist
Jan  1 13:06:56.747: INFO: namespace e2e-tests-kubectl-clhzl deletion completed in 24.365083722s

• [SLOW TEST:29.243 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:06:56.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0101 13:07:00.673287       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:07:00.673: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:07:00.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-w58bj" for this suite.
Jan  1 13:07:08.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:08.256: INFO: namespace: e2e-tests-gc-w58bj, resource: bindings, ignored listing per whitelist
Jan  1 13:07:08.322: INFO: namespace e2e-tests-gc-w58bj deletion completed in 7.619489087s

• [SLOW TEST:11.575 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:07:08.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:07:08.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-459qc'
Jan  1 13:07:08.863: INFO: stderr: ""
Jan  1 13:07:08.863: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  1 13:07:23.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-459qc -o json'
Jan  1 13:07:24.210: INFO: stderr: ""
Jan  1 13:07:24.210: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-01T13:07:08Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-459qc\",\n        \"resourceVersion\": \"16802012\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-459qc/pods/e2e-test-nginx-pod\",\n        \"uid\": \"9dd893e7-2c97-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-qvbtz\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-qvbtz\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-qvbtz\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:07:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:07:20Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:07:20Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:07:08Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://d3f6c8b41823f670b60c874153476f744488aaa2c89849ba81d7114e6e07a4f0\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-01-01T13:07:18Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-01T13:07:08Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  1 13:07:24.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-459qc'
Jan  1 13:07:24.894: INFO: stderr: ""
Jan  1 13:07:24.894: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  1 13:07:25.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-459qc'
Jan  1 13:07:34.083: INFO: stderr: ""
Jan  1 13:07:34.083: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:07:34.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-459qc" for this suite.
Jan  1 13:07:40.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:40.353: INFO: namespace: e2e-tests-kubectl-459qc, resource: bindings, ignored listing per whitelist
Jan  1 13:07:40.471: INFO: namespace e2e-tests-kubectl-459qc deletion completed in 6.365469197s

• [SLOW TEST:32.148 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:07:40.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b0f0b0f9-2c97-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 13:07:40.878: INFO: Waiting up to 5m0s for pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-b4ccp" to be "success or failure"
Jan  1 13:07:40.894: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.104683ms
Jan  1 13:07:42.910: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032522953s
Jan  1 13:07:44.961: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082697753s
Jan  1 13:07:47.017: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139252179s
Jan  1 13:07:49.636: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758236528s
Jan  1 13:07:51.649: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.770718262s
Jan  1 13:07:53.660: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.782636448s
STEP: Saw pod success
Jan  1 13:07:53.661: INFO: Pod "pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:07:53.666: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  1 13:07:54.278: INFO: Waiting for pod pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:07:54.484: INFO: Pod pod-secrets-b0f27cfd-2c97-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:07:54.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b4ccp" for this suite.
Jan  1 13:08:02.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:08:02.729: INFO: namespace: e2e-tests-secrets-b4ccp, resource: bindings, ignored listing per whitelist
Jan  1 13:08:02.738: INFO: namespace e2e-tests-secrets-b4ccp deletion completed in 8.225615039s

• [SLOW TEST:22.265 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:08:02.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-be27e92b-2c97-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 13:08:03.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-k9ps8" to be "success or failure"
Jan  1 13:08:03.228: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.3142ms
Jan  1 13:08:05.247: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047960993s
Jan  1 13:08:07.271: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0718443s
Jan  1 13:08:09.307: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108246943s
Jan  1 13:08:11.540: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340767505s
Jan  1 13:08:13.557: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.357864479s
Jan  1 13:08:15.568: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.369298945s
STEP: Saw pod success
Jan  1 13:08:15.568: INFO: Pod "pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:08:15.572: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 13:08:16.435: INFO: Waiting for pod pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:08:16.719: INFO: Pod pod-projected-configmaps-be4035a1-2c97-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:08:16.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k9ps8" for this suite.
Jan  1 13:08:23.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:08:23.247: INFO: namespace: e2e-tests-projected-k9ps8, resource: bindings, ignored listing per whitelist
Jan  1 13:08:23.315: INFO: namespace e2e-tests-projected-k9ps8 deletion completed in 6.246469387s

• [SLOW TEST:20.577 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:08:23.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  1 13:08:23.593: INFO: Waiting up to 5m0s for pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005" in namespace "e2e-tests-emptydir-kppbz" to be "success or failure"
Jan  1 13:08:23.621: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.237139ms
Jan  1 13:08:25.977: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383029148s
Jan  1 13:08:28.001: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407676443s
Jan  1 13:08:32.526: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.932449237s
Jan  1 13:08:34.618: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.024234494s
Jan  1 13:08:36.652: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.058079757s
Jan  1 13:08:39.077: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.483459191s
STEP: Saw pod success
Jan  1 13:08:39.078: INFO: Pod "pod-ca59cd49-2c97-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:08:39.341: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ca59cd49-2c97-11ea-8bf6-0242ac110005 container test-container: 
STEP: delete the pod
Jan  1 13:08:41.001: INFO: Waiting for pod pod-ca59cd49-2c97-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:08:41.015: INFO: Pod pod-ca59cd49-2c97-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:08:41.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kppbz" for this suite.
Jan  1 13:08:49.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:08:49.947: INFO: namespace: e2e-tests-emptydir-kppbz, resource: bindings, ignored listing per whitelist
Jan  1 13:08:49.974: INFO: namespace e2e-tests-emptydir-kppbz deletion completed in 8.94635655s

• [SLOW TEST:26.656 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:08:49.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-da7612eb-2c97-11ea-8bf6-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-da76177c-2c97-11ea-8bf6-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-da7612eb-2c97-11ea-8bf6-0242ac110005
STEP: Updating configmap cm-test-opt-upd-da76177c-2c97-11ea-8bf6-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-da76186b-2c97-11ea-8bf6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:09:11.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h75c2" for this suite.
Jan  1 13:09:35.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:09:35.458: INFO: namespace: e2e-tests-configmap-h75c2, resource: bindings, ignored listing per whitelist
Jan  1 13:09:35.548: INFO: namespace e2e-tests-configmap-h75c2 deletion completed in 24.201341754s

• [SLOW TEST:45.574 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:09:35.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f58a12dd-2c97-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 13:09:35.987: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005" in namespace "e2e-tests-projected-vvmg5" to be "success or failure"
Jan  1 13:09:35.993: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664104ms
Jan  1 13:09:38.081: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09448044s
Jan  1 13:09:40.186: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199116403s
Jan  1 13:09:42.607: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.620553437s
Jan  1 13:09:44.650: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.663360094s
Jan  1 13:09:46.736: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.749239812s
Jan  1 13:09:49.113: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.126192026s
STEP: Saw pod success
Jan  1 13:09:49.113: INFO: Pod "pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:09:49.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 13:09:49.587: INFO: Waiting for pod pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:09:49.606: INFO: Pod pod-projected-configmaps-f58d16be-2c97-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:09:49.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vvmg5" for this suite.
Jan  1 13:09:55.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:09:55.954: INFO: namespace: e2e-tests-projected-vvmg5, resource: bindings, ignored listing per whitelist
Jan  1 13:09:56.013: INFO: namespace e2e-tests-projected-vvmg5 deletion completed in 6.310604767s

• [SLOW TEST:20.465 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:09:56.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-xzkrr
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-xzkrr
STEP: Deleting pre-stop pod
Jan  1 13:10:21.412: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:10:21.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-xzkrr" for this suite.
Jan  1 13:11:03.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:11:04.058: INFO: namespace: e2e-tests-prestop-xzkrr, resource: bindings, ignored listing per whitelist
Jan  1 13:11:04.167: INFO: namespace e2e-tests-prestop-xzkrr deletion completed in 42.703189297s

• [SLOW TEST:68.153 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:11:04.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2a43bf02-2c98-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  1 13:11:04.412: INFO: Waiting up to 5m0s for pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005" in namespace "e2e-tests-secrets-5sfnc" to be "success or failure"
Jan  1 13:11:04.502: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 89.80382ms
Jan  1 13:11:06.627: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215376487s
Jan  1 13:11:08.666: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254236665s
Jan  1 13:11:11.365: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.952612346s
Jan  1 13:11:13.387: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.975445403s
Jan  1 13:11:15.402: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.989856276s
STEP: Saw pod success
Jan  1 13:11:15.402: INFO: Pod "pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:11:15.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan  1 13:11:16.112: INFO: Waiting for pod pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:11:16.354: INFO: Pod pod-secrets-2a44ad34-2c98-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:11:16.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5sfnc" for this suite.
Jan  1 13:11:22.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:11:22.665: INFO: namespace: e2e-tests-secrets-5sfnc, resource: bindings, ignored listing per whitelist
Jan  1 13:11:22.710: INFO: namespace e2e-tests-secrets-5sfnc deletion completed in 6.337143442s

• [SLOW TEST:18.543 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:11:22.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-gch7
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 13:11:22.956: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gch7" in namespace "e2e-tests-subpath-6cc5k" to be "success or failure"
Jan  1 13:11:22.975: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.400085ms
Jan  1 13:11:25.334: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37792601s
Jan  1 13:11:27.355: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398609507s
Jan  1 13:11:29.532: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575638594s
Jan  1 13:11:31.573: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616927997s
Jan  1 13:11:33.599: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642778642s
Jan  1 13:11:35.834: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.877538069s
Jan  1 13:11:38.430: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.474247673s
Jan  1 13:11:40.452: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 17.495769782s
Jan  1 13:11:42.498: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 19.541581854s
Jan  1 13:11:44.545: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 21.588609727s
Jan  1 13:11:46.585: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 23.62881199s
Jan  1 13:11:48.649: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 25.692922432s
Jan  1 13:11:50.707: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 27.750527142s
Jan  1 13:11:52.732: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 29.775850104s
Jan  1 13:11:54.748: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 31.79215202s
Jan  1 13:11:56.767: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Running", Reason="", readiness=false. Elapsed: 33.810874763s
Jan  1 13:11:59.779: INFO: Pod "pod-subpath-test-downwardapi-gch7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.823282072s
STEP: Saw pod success
Jan  1 13:11:59.780: INFO: Pod "pod-subpath-test-downwardapi-gch7" satisfied condition "success or failure"
Jan  1 13:12:00.147: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-gch7 container test-container-subpath-downwardapi-gch7: 
STEP: delete the pod
Jan  1 13:12:00.386: INFO: Waiting for pod pod-subpath-test-downwardapi-gch7 to disappear
Jan  1 13:12:00.424: INFO: Pod pod-subpath-test-downwardapi-gch7 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gch7
Jan  1 13:12:00.424: INFO: Deleting pod "pod-subpath-test-downwardapi-gch7" in namespace "e2e-tests-subpath-6cc5k"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:12:00.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6cc5k" for this suite.
Jan  1 13:12:06.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:12:06.762: INFO: namespace: e2e-tests-subpath-6cc5k, resource: bindings, ignored listing per whitelist
Jan  1 13:12:06.839: INFO: namespace e2e-tests-subpath-6cc5k deletion completed in 6.38763417s

• [SLOW TEST:44.129 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:12:06.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-jp9jx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jp9jx to expose endpoints map[]
Jan  1 13:12:07.067: INFO: Get endpoints failed (11.309038ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  1 13:12:08.080: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jp9jx exposes endpoints map[] (1.023965232s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-jp9jx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jp9jx to expose endpoints map[pod1:[100]]
Jan  1 13:12:12.283: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.172738015s elapsed, will retry)
Jan  1 13:12:18.024: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jp9jx exposes endpoints map[pod1:[100]] (9.914497412s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-jp9jx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jp9jx to expose endpoints map[pod1:[100] pod2:[101]]
Jan  1 13:12:23.616: INFO: Unexpected endpoints: found map[503b719c-2c98-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.571010736s elapsed, will retry)
Jan  1 13:12:25.671: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jp9jx exposes endpoints map[pod2:[101] pod1:[100]] (7.625986384s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-jp9jx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jp9jx to expose endpoints map[pod2:[101]]
Jan  1 13:12:27.554: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jp9jx exposes endpoints map[pod2:[101]] (1.871793548s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-jp9jx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jp9jx to expose endpoints map[]
Jan  1 13:12:29.097: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jp9jx exposes endpoints map[] (1.324538113s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:12:29.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jp9jx" for this suite.
Jan  1 13:12:53.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:12:53.677: INFO: namespace: e2e-tests-services-jp9jx, resource: bindings, ignored listing per whitelist
Jan  1 13:12:53.723: INFO: namespace e2e-tests-services-jp9jx deletion completed in 24.377752988s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.884 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:12:53.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-6bb09a9d-2c98-11ea-8bf6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  1 13:12:54.199: INFO: Waiting up to 5m0s for pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005" in namespace "e2e-tests-configmap-9lf2j" to be "success or failure"
Jan  1 13:12:54.207: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.242358ms
Jan  1 13:12:56.225: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02529345s
Jan  1 13:12:58.248: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048441341s
Jan  1 13:13:00.429: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229247814s
Jan  1 13:13:02.463: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263066544s
Jan  1 13:13:04.498: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.298654052s
STEP: Saw pod success
Jan  1 13:13:04.499: INFO: Pod "pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005" satisfied condition "success or failure"
Jan  1 13:13:04.546: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  1 13:13:04.671: INFO: Waiting for pod pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005 to disappear
Jan  1 13:13:04.680: INFO: Pod pod-configmaps-6bb1d103-2c98-11ea-8bf6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:13:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9lf2j" for this suite.
Jan  1 13:13:10.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:13:10.938: INFO: namespace: e2e-tests-configmap-9lf2j, resource: bindings, ignored listing per whitelist
Jan  1 13:13:10.959: INFO: namespace e2e-tests-configmap-9lf2j deletion completed in 6.269262845s

• [SLOW TEST:17.235 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:13:10.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:14:11.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wzn48" for this suite.
Jan  1 13:14:37.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:14:37.516: INFO: namespace: e2e-tests-container-probe-wzn48, resource: bindings, ignored listing per whitelist
Jan  1 13:14:37.549: INFO: namespace e2e-tests-container-probe-wzn48 deletion completed in 26.245499113s

• [SLOW TEST:86.589 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:14:37.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  1 13:14:37.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:38.339: INFO: stderr: ""
Jan  1 13:14:38.339: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 13:14:38.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:38.598: INFO: stderr: ""
Jan  1 13:14:38.598: INFO: stdout: "update-demo-nautilus-c6b2s update-demo-nautilus-zzvzf "
Jan  1 13:14:38.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c6b2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:38.798: INFO: stderr: ""
Jan  1 13:14:38.798: INFO: stdout: ""
Jan  1 13:14:38.798: INFO: update-demo-nautilus-c6b2s is created but not running
Jan  1 13:14:43.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:44.179: INFO: stderr: ""
Jan  1 13:14:44.180: INFO: stdout: "update-demo-nautilus-c6b2s update-demo-nautilus-zzvzf "
Jan  1 13:14:44.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c6b2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:44.358: INFO: stderr: ""
Jan  1 13:14:44.358: INFO: stdout: ""
Jan  1 13:14:44.358: INFO: update-demo-nautilus-c6b2s is created but not running
Jan  1 13:14:49.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:49.699: INFO: stderr: ""
Jan  1 13:14:49.699: INFO: stdout: "update-demo-nautilus-c6b2s update-demo-nautilus-zzvzf "
Jan  1 13:14:49.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c6b2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:50.027: INFO: stderr: ""
Jan  1 13:14:50.027: INFO: stdout: ""
Jan  1 13:14:50.027: INFO: update-demo-nautilus-c6b2s is created but not running
Jan  1 13:14:55.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:55.243: INFO: stderr: ""
Jan  1 13:14:55.243: INFO: stdout: "update-demo-nautilus-c6b2s update-demo-nautilus-zzvzf "
Jan  1 13:14:55.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c6b2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:55.392: INFO: stderr: ""
Jan  1 13:14:55.392: INFO: stdout: "true"
Jan  1 13:14:55.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c6b2s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:55.541: INFO: stderr: ""
Jan  1 13:14:55.541: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 13:14:55.541: INFO: validating pod update-demo-nautilus-c6b2s
Jan  1 13:14:55.556: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 13:14:55.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 13:14:55.556: INFO: update-demo-nautilus-c6b2s is verified up and running
Jan  1 13:14:55.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zzvzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:55.722: INFO: stderr: ""
Jan  1 13:14:55.722: INFO: stdout: "true"
Jan  1 13:14:55.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zzvzf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:55.877: INFO: stderr: ""
Jan  1 13:14:55.878: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 13:14:55.878: INFO: validating pod update-demo-nautilus-zzvzf
Jan  1 13:14:55.911: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 13:14:55.912: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  1 13:14:55.912: INFO: update-demo-nautilus-zzvzf is verified up and running
STEP: using delete to clean up resources
Jan  1 13:14:55.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:56.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 13:14:56.111: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  1 13:14:56.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-rdgpq'
Jan  1 13:14:56.368: INFO: stderr: "No resources found.\n"
Jan  1 13:14:56.368: INFO: stdout: ""
Jan  1 13:14:56.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-rdgpq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 13:14:56.744: INFO: stderr: ""
Jan  1 13:14:56.744: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:14:56.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rdgpq" for this suite.
Jan  1 13:15:22.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:15:23.121: INFO: namespace: e2e-tests-kubectl-rdgpq, resource: bindings, ignored listing per whitelist
Jan  1 13:15:23.289: INFO: namespace e2e-tests-kubectl-rdgpq deletion completed in 26.515158279s

• [SLOW TEST:45.740 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  1 13:15:23.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-drmmp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 13:15:23.507: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 13:15:57.890: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-drmmp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 13:15:57.891: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 13:15:58.389: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  1 13:15:58.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-drmmp" for this suite.
Jan  1 13:16:24.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:16:24.648: INFO: namespace: e2e-tests-pod-network-test-drmmp, resource: bindings, ignored listing per whitelist
Jan  1 13:16:24.650: INFO: namespace e2e-tests-pod-network-test-drmmp deletion completed in 26.231320247s

• [SLOW TEST:61.360 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
Jan  1 13:16:24.650: INFO: Running AfterSuite actions on all nodes
Jan  1 13:16:24.650: INFO: Running AfterSuite actions on node 1
Jan  1 13:16:24.650: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8945.344 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS