I1222 10:47:04.549512 8 e2e.go:224] Starting e2e run "638eeebb-24a8-11ea-b023-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577011623 - Will randomize all specs
Will run 201 of 2164 specs

Dec 22 10:47:05.311: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 10:47:05.318: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 22 10:47:05.347: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 22 10:47:05.387: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 22 10:47:05.387: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 22 10:47:05.387: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 22 10:47:05.398: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 22 10:47:05.398: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 22 10:47:05.398: INFO: e2e test version: v1.13.12
Dec 22 10:47:05.399: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:47:05.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Dec 22 10:47:05.576: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 10:47:05.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-fdxnj" to be "success or failure"
Dec 22 10:47:05.609: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.653121ms
Dec 22 10:47:07.803: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218620861s
Dec 22 10:47:09.815: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231056548s
Dec 22 10:47:13.226: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.641670878s
Dec 22 10:47:15.238: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.653601701s
Dec 22 10:47:17.257: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.672765123s
STEP: Saw pod success
Dec 22 10:47:17.257: INFO: Pod "downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:47:17.271: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005 container client-container:
STEP: delete the pod
Dec 22 10:47:17.458: INFO: Waiting for pod downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005 to disappear
Dec 22 10:47:17.468: INFO: Pod downwardapi-volume-64ff4b59-24a8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:47:17.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fdxnj" for this suite.
Dec 22 10:47:23.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:47:23.655: INFO: namespace: e2e-tests-downward-api-fdxnj, resource: bindings, ignored listing per whitelist
Dec 22 10:47:23.700: INFO: namespace e2e-tests-downward-api-fdxnj deletion completed in 6.223348043s
• [SLOW TEST:18.300 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:47:23.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005
Dec 22 10:47:24.170: INFO: Pod name my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005: Found 0 pods out of 1
Dec 22 10:47:30.622: INFO: Pod name my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005: Found 1 pods out of 1
Dec 22 10:47:30.622: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005" are running
Dec 22 10:47:34.667: INFO: Pod "my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005-5kqcf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 10:47:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 10:47:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 10:47:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 10:47:24 +0000 UTC Reason: Message:}])
Dec 22 10:47:34.667: INFO: Trying to dial the pod
Dec 22 10:47:39.704: INFO: Controller my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005: Got expected result from replica 1 [my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005-5kqcf]: "my-hostname-basic-700baeb9-24a8-11ea-b023-0242ac110005-5kqcf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:47:39.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-x5zq4" for this suite.
Dec 22 10:47:45.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:47:46.010: INFO: namespace: e2e-tests-replication-controller-x5zq4, resource: bindings, ignored listing per whitelist
Dec 22 10:47:46.026: INFO: namespace e2e-tests-replication-controller-x5zq4 deletion completed in 6.311462367s
• [SLOW TEST:22.326 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:47:46.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 10:47:46.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-m56bz" to be "success or failure"
Dec 22 10:47:46.264: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.798891ms
Dec 22 10:47:48.281: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026991475s
Dec 22 10:47:50.626: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372114341s
Dec 22 10:47:52.765: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511153985s
Dec 22 10:47:55.333: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.078802808s
Dec 22 10:47:57.384: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.130046978s
Dec 22 10:47:59.408: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.153676204s
STEP: Saw pod success
Dec 22 10:47:59.408: INFO: Pod "downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:47:59.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005 container client-container:
STEP: delete the pod
Dec 22 10:47:59.501: INFO: Waiting for pod downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005 to disappear
Dec 22 10:47:59.580: INFO: Pod downwardapi-volume-7d3a080b-24a8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:47:59.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m56bz" for this suite.
Dec 22 10:48:05.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:48:05.688: INFO: namespace: e2e-tests-projected-m56bz, resource: bindings, ignored listing per whitelist
Dec 22 10:48:05.750: INFO: namespace e2e-tests-projected-m56bz deletion completed in 6.161870649s
• [SLOW TEST:19.723 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:48:05.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 22 10:48:06.092: INFO: Waiting up to 5m0s for pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-wm24p" to be "success or failure"
Dec 22 10:48:06.180: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 88.520415ms
Dec 22 10:48:08.192: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100151029s
Dec 22 10:48:10.217: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124617305s
Dec 22 10:48:12.447: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355509287s
Dec 22 10:48:14.520: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.428304391s
Dec 22 10:48:16.561: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468807991s
STEP: Saw pod success
Dec 22 10:48:16.561: INFO: Pod "pod-890ebfb8-24a8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:48:16.592: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-890ebfb8-24a8-11ea-b023-0242ac110005 container test-container:
STEP: delete the pod
Dec 22 10:48:16.900: INFO: Waiting for pod pod-890ebfb8-24a8-11ea-b023-0242ac110005 to disappear
Dec 22 10:48:16.916: INFO: Pod pod-890ebfb8-24a8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:48:16.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wm24p" for this suite.
Dec 22 10:48:22.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:48:23.069: INFO: namespace: e2e-tests-emptydir-wm24p, resource: bindings, ignored listing per whitelist
Dec 22 10:48:23.081: INFO: namespace e2e-tests-emptydir-wm24p deletion completed in 6.151952951s
• [SLOW TEST:17.331 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:48:23.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xhjq2
Dec 22 10:48:33.306: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xhjq2
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 10:48:33.313: INFO: Initial restart count of pod liveness-http is 0
Dec 22 10:48:54.649: INFO: Restart count of pod e2e-tests-container-probe-xhjq2/liveness-http is now 1 (21.33659006s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:48:54.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xhjq2" for this suite.
Dec 22 10:49:00.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:49:01.027: INFO: namespace: e2e-tests-container-probe-xhjq2, resource: bindings, ignored listing per whitelist
Dec 22 10:49:01.055: INFO: namespace e2e-tests-container-probe-xhjq2 deletion completed in 6.288219201s
• [SLOW TEST:37.973 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:49:01.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 22 10:49:23.386: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:23.457: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:25.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:25.477: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:27.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:27.474: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:29.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:29.473: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:31.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:31.475: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:33.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:33.478: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:35.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:35.472: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:37.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:37.477: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:39.459: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:39.480: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:41.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:41.985: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:43.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:43.767: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:45.459: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:45.477: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:47.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:47.473: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:49.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:49.480: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:51.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:51.479: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 22 10:49:53.458: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 22 10:49:53.505: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:49:53.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pg9nl" for this suite.
Dec 22 10:50:19.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:50:19.633: INFO: namespace: e2e-tests-container-lifecycle-hook-pg9nl, resource: bindings, ignored listing per whitelist
Dec 22 10:50:19.839: INFO: namespace e2e-tests-container-lifecycle-hook-pg9nl deletion completed in 26.287709334s
• [SLOW TEST:78.779 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:50:19.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-t7nhn/secret-test-d8f5d1b6-24a8-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 10:50:20.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-t7nhn" to be "success or failure"
Dec 22 10:50:20.164: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.496983ms
Dec 22 10:50:22.238: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084901206s
Dec 22 10:50:24.259: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105708343s
Dec 22 10:50:26.930: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.776598572s
Dec 22 10:50:28.964: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810764118s
Dec 22 10:50:31.058: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.904861187s
STEP: Saw pod success
Dec 22 10:50:31.059: INFO: Pod "pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:50:31.078: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005 container env-test:
STEP: delete the pod
Dec 22 10:50:31.216: INFO: Waiting for pod pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005 to disappear
Dec 22 10:50:31.231: INFO: Pod pod-configmaps-d8f692be-24a8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:50:31.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-t7nhn" for this suite.
Dec 22 10:50:37.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:50:37.481: INFO: namespace: e2e-tests-secrets-t7nhn, resource: bindings, ignored listing per whitelist
Dec 22 10:50:37.553: INFO: namespace e2e-tests-secrets-t7nhn deletion completed in 6.314638104s
• [SLOW TEST:17.714 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:50:37.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 22 10:50:38.001: INFO: Waiting up to 5m0s for pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-hs4jp" to be "success or failure"
Dec 22 10:50:38.036: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.901052ms
Dec 22 10:50:40.059: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057513045s
Dec 22 10:50:42.088: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085947249s
Dec 22 10:50:44.255: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252921736s
Dec 22 10:50:46.281: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278854088s
Dec 22 10:50:48.294: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292066049s
STEP: Saw pod success
Dec 22 10:50:48.294: INFO: Pod "downward-api-e383bcdc-24a8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:50:48.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e383bcdc-24a8-11ea-b023-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 22 10:50:48.544: INFO: Waiting for pod downward-api-e383bcdc-24a8-11ea-b023-0242ac110005 to disappear
Dec 22 10:50:48.564: INFO: Pod downward-api-e383bcdc-24a8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:50:48.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hs4jp" for this suite.
Dec 22 10:50:56.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:50:56.817: INFO: namespace: e2e-tests-downward-api-hs4jp, resource: bindings, ignored listing per whitelist
Dec 22 10:50:56.858: INFO: namespace e2e-tests-downward-api-hs4jp deletion completed in 8.280455332s
• [SLOW TEST:19.304 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:50:56.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 22 10:50:57.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d8fc2'
Dec 22 10:50:59.782: INFO: stderr: ""
Dec 22 10:50:59.782: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 22 10:51:00.795: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:00.796: INFO: Found 0 / 1
Dec 22 10:51:01.867: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:01.868: INFO: Found 0 / 1
Dec 22 10:51:02.975: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:02.975: INFO: Found 0 / 1
Dec 22 10:51:03.816: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:03.816: INFO: Found 0 / 1
Dec 22 10:51:04.800: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:04.800: INFO: Found 0 / 1
Dec 22 10:51:06.594: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:06.594: INFO: Found 0 / 1
Dec 22 10:51:06.945: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:06.946: INFO: Found 0 / 1
Dec 22 10:51:07.798: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:07.799: INFO: Found 0 / 1
Dec 22 10:51:08.789: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:08.789: INFO: Found 0 / 1
Dec 22 10:51:09.801: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:09.801: INFO: Found 0 / 1
Dec 22 10:51:10.802: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:10.802: INFO: Found 1 / 1
Dec 22 10:51:10.802: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 22 10:51:10.810: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 10:51:10.810: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Dec 22 10:51:10.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9ccxz redis-master --namespace=e2e-tests-kubectl-d8fc2'
Dec 22 10:51:11.128: INFO: stderr: ""
Dec 22 10:51:11.128: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Dec 10:51:08.614 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Dec 10:51:08.614 # Server started, Redis version 3.2.12\n1:M 22 Dec 10:51:08.615 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Dec 10:51:08.615 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 22 10:51:11.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-9ccxz redis-master --namespace=e2e-tests-kubectl-d8fc2 --tail=1'
Dec 22 10:51:11.297: INFO: stderr: ""
Dec 22 10:51:11.297: INFO: stdout: "1:M 22 Dec 10:51:08.615 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 22 10:51:11.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-9ccxz redis-master --namespace=e2e-tests-kubectl-d8fc2 --limit-bytes=1'
Dec 22 10:51:11.469: INFO: stderr: ""
Dec 22 10:51:11.470: INFO: stdout: " "
STEP: exposing timestamps
Dec 22 10:51:11.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-9ccxz redis-master --namespace=e2e-tests-kubectl-d8fc2 --tail=1 --timestamps'
Dec 22 10:51:11.618: INFO: stderr: ""
Dec 22 10:51:11.618: INFO: stdout: "2019-12-22T10:51:08.616238979Z 1:M 22 Dec 10:51:08.615 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 22 10:51:14.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-9ccxz redis-master --namespace=e2e-tests-kubectl-d8fc2 --since=1s'
Dec 22 10:51:14.288: INFO: stderr: ""
Dec 22 10:51:14.288: INFO: stdout: ""
Dec 22 10:51:14.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-9ccxz redis-master --namespace=e2e-tests-kubectl-d8fc2 --since=24h'
Dec 22 10:51:14.504: INFO: stderr: ""
Dec 22 10:51:14.504: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Dec 10:51:08.614 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Dec 10:51:08.614 # Server started, Redis version 3.2.12\n1:M 22 Dec 10:51:08.615 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 22 Dec 10:51:08.615 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Dec 22 10:51:14.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-d8fc2' Dec 22 10:51:14.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 22 10:51:14.727: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 22 10:51:14.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-d8fc2' Dec 22 10:51:15.136: INFO: stderr: "No resources found.\n" Dec 22 10:51:15.137: INFO: stdout: "" Dec 22 10:51:15.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-d8fc2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 22 10:51:15.339: INFO: stderr: "" Dec 22 10:51:15.339: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 10:51:15.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d8fc2" for this suite. 
Dec 22 10:51:37.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:51:37.551: INFO: namespace: e2e-tests-kubectl-d8fc2, resource: bindings, ignored listing per whitelist
Dec 22 10:51:37.569: INFO: namespace e2e-tests-kubectl-d8fc2 deletion completed in 22.208716221s
• [SLOW TEST:40.711 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:51:37.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-073d6741-24a9-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 10:51:37.839: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-j4zdm" to be "success or failure"
Dec 22 10:51:37.862: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.996583ms
Dec 22 10:51:40.931: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.091965601s
Dec 22 10:51:42.967: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.128122282s
Dec 22 10:51:45.274: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.435214735s
Dec 22 10:51:47.321: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.481929128s
Dec 22 10:51:49.332: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.493098102s
STEP: Saw pod success
Dec 22 10:51:49.332: INFO: Pod "pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:51:49.336: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 22 10:51:49.459: INFO: Waiting for pod pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005 to disappear
Dec 22 10:51:49.480: INFO: Pod pod-projected-configmaps-073f1d43-24a9-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:51:49.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j4zdm" for this suite.
Dec 22 10:51:55.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:51:55.748: INFO: namespace: e2e-tests-projected-j4zdm, resource: bindings, ignored listing per whitelist
Dec 22 10:51:55.770: INFO: namespace e2e-tests-projected-j4zdm deletion completed in 6.276102514s
• [SLOW TEST:18.201 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:51:55.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 10:51:56.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-6h2h8" to be "success or failure"
Dec 22 10:51:56.235: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 103.940276ms
Dec 22 10:51:58.682: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550767513s
Dec 22 10:52:00.705: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573850864s
Dec 22 10:52:03.343: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.211863175s
Dec 22 10:52:05.363: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.231159913s
Dec 22 10:52:07.377: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.245054776s
STEP: Saw pod success
Dec 22 10:52:07.377: INFO: Pod "downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:52:07.383: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005 container client-container:
STEP: delete the pod
Dec 22 10:52:07.522: INFO: Waiting for pod downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005 to disappear
Dec 22 10:52:07.531: INFO: Pod downwardapi-volume-122b6ef6-24a9-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:52:07.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6h2h8" for this suite.
Dec 22 10:52:14.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:52:14.955: INFO: namespace: e2e-tests-projected-6h2h8, resource: bindings, ignored listing per whitelist
Dec 22 10:52:15.148: INFO: namespace e2e-tests-projected-6h2h8 deletion completed in 7.607030394s
• [SLOW TEST:19.377 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:52:15.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 22 10:52:15.342: INFO: Waiting up to 5m0s for pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-6d88n" to be "success or failure"
Dec 22 10:52:15.354: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.869449ms
Dec 22 10:52:17.468: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125838012s
Dec 22 10:52:19.481: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138542221s
Dec 22 10:52:22.097: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.755237883s
Dec 22 10:52:24.113: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.770596318s
Dec 22 10:52:26.150: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.807851122s
Dec 22 10:52:28.175: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.833096483s
STEP: Saw pod success
Dec 22 10:52:28.175: INFO: Pod "pod-1d9ec4b2-24a9-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:52:28.180: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1d9ec4b2-24a9-11ea-b023-0242ac110005 container test-container:
STEP: delete the pod
Dec 22 10:52:28.787: INFO: Waiting for pod pod-1d9ec4b2-24a9-11ea-b023-0242ac110005 to disappear
Dec 22 10:52:28.794: INFO: Pod pod-1d9ec4b2-24a9-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:52:28.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6d88n" for this suite.
Dec 22 10:52:34.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:52:35.181: INFO: namespace: e2e-tests-emptydir-6d88n, resource: bindings, ignored listing per whitelist
Dec 22 10:52:35.184: INFO: namespace e2e-tests-emptydir-6d88n deletion completed in 6.381815222s
• [SLOW TEST:20.036 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:52:35.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 10:52:35.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-m4nsl" to be "success or failure"
Dec 22 10:52:35.428: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.973254ms
Dec 22 10:52:37.984: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582138423s
Dec 22 10:52:40.017: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.615608053s
Dec 22 10:52:42.218: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.816219968s
Dec 22 10:52:44.236: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834090538s
Dec 22 10:52:46.309: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907397881s
STEP: Saw pod success
Dec 22 10:52:46.309: INFO: Pod "downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:52:46.319: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005 container client-container:
STEP: delete the pod
Dec 22 10:52:46.576: INFO: Waiting for pod downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005 to disappear
Dec 22 10:52:46.655: INFO: Pod downwardapi-volume-298f306a-24a9-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:52:46.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m4nsl" for this suite.
Dec 22 10:52:54.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:52:54.979: INFO: namespace: e2e-tests-projected-m4nsl, resource: bindings, ignored listing per whitelist
Dec 22 10:52:54.999: INFO: namespace e2e-tests-projected-m4nsl deletion completed in 8.329770974s
• [SLOW TEST:19.815 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:52:55.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3569b040-24a9-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 10:52:55.264: INFO: Waiting up to 5m0s for pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-8bznd" to be "success or failure"
Dec 22 10:52:55.274: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204883ms
Dec 22 10:52:57.291: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026438024s
Dec 22 10:52:59.313: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049387236s
Dec 22 10:53:02.011: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.746764585s
Dec 22 10:53:04.032: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767663467s
Dec 22 10:53:06.048: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.783832401s
Dec 22 10:53:08.256: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.992272423s
STEP: Saw pod success
Dec 22 10:53:08.257: INFO: Pod "pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:53:08.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 22 10:53:08.479: INFO: Waiting for pod pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005 to disappear
Dec 22 10:53:08.554: INFO: Pod pod-configmaps-356a35d5-24a9-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:53:08.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8bznd" for this suite.
Dec 22 10:53:14.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:53:15.056: INFO: namespace: e2e-tests-configmap-8bznd, resource: bindings, ignored listing per whitelist
Dec 22 10:53:15.063: INFO: namespace e2e-tests-configmap-8bznd deletion completed in 6.362491879s
• [SLOW TEST:20.064 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:53:15.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-415635ca-24a9-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 10:53:15.262: INFO: Waiting up to 5m0s for pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-ndwf5" to be "success or failure"
Dec 22 10:53:15.270: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.125975ms
Dec 22 10:53:17.352: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089711101s
Dec 22 10:53:19.373: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110822421s
Dec 22 10:53:22.074: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81180148s
Dec 22 10:53:24.107: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.844926386s
Dec 22 10:53:26.124: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.861176952s
STEP: Saw pod success
Dec 22 10:53:26.124: INFO: Pod "pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 10:53:26.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 22 10:53:26.311: INFO: Waiting for pod pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005 to disappear
Dec 22 10:53:26.333: INFO: Pod pod-configmaps-4157039c-24a9-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:53:26.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ndwf5" for this suite.
Dec 22 10:53:32.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:53:32.725: INFO: namespace: e2e-tests-configmap-ndwf5, resource: bindings, ignored listing per whitelist
Dec 22 10:53:32.769: INFO: namespace e2e-tests-configmap-ndwf5 deletion completed in 6.424372511s
• [SLOW TEST:17.706 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:53:32.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 10:53:32.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 10:53:43.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kz9qd" for this suite.
Dec 22 10:54:37.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 10:54:37.393: INFO: namespace: e2e-tests-pods-kz9qd, resource: bindings, ignored listing per whitelist
Dec 22 10:54:37.534: INFO: namespace e2e-tests-pods-kz9qd deletion completed in 54.268822815s
• [SLOW TEST:64.765 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 10:54:37.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 22 10:54:38.541: INFO: Pod name wrapped-volume-race-72e53e68-24a9-11ea-b023-0242ac110005: Found 0 pods out of 5
Dec 22 10:54:43.571: INFO: Pod name wrapped-volume-race-72e53e68-24a9-11ea-b023-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-72e53e68-24a9-11ea-b023-0242ac110005 in namespace e2e-tests-emptydir-wrapper-p7zgs, will wait for the garbage collector to delete the pods
Dec 22 10:56:37.943: INFO: Deleting ReplicationController wrapped-volume-race-72e53e68-24a9-11ea-b023-0242ac110005 took: 20.62795ms
Dec 22 10:56:39.644: INFO: Terminating ReplicationController wrapped-volume-race-72e53e68-24a9-11ea-b023-0242ac110005 pods took: 1.700811529s
STEP: Creating RC which spawns configmap-volume pods
Dec 22 10:57:24.906: INFO: Pod name wrapped-volume-race-d60cae41-24a9-11ea-b023-0242ac110005: Found 0 pods out of 5
Dec 22 10:57:29.940: INFO: Pod name wrapped-volume-race-d60cae41-24a9-11ea-b023-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d60cae41-24a9-11ea-b023-0242ac110005 in namespace e2e-tests-emptydir-wrapper-p7zgs, will wait for the garbage collector to delete the pods
Dec 22 10:59:34.227: INFO: Deleting ReplicationController wrapped-volume-race-d60cae41-24a9-11ea-b023-0242ac110005 took: 86.842489ms
Dec 22 10:59:34.828: INFO: Terminating ReplicationController wrapped-volume-race-d60cae41-24a9-11ea-b023-0242ac110005 pods took: 600.81143ms
STEP: Creating RC which spawns configmap-volume pods
Dec 22 11:00:23.721: INFO: Pod name wrapped-volume-race-40ac333a-24aa-11ea-b023-0242ac110005: Found 0 pods out of 5
Dec 22 11:00:28.740: INFO: Pod name wrapped-volume-race-40ac333a-24aa-11ea-b023-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-40ac333a-24aa-11ea-b023-0242ac110005 in namespace e2e-tests-emptydir-wrapper-p7zgs, will wait for the garbage collector to delete the pods
Dec 22 11:02:35.335: INFO: Deleting ReplicationController wrapped-volume-race-40ac333a-24aa-11ea-b023-0242ac110005 took: 32.420284ms
Dec 22 11:02:35.636: INFO: Terminating ReplicationController wrapped-volume-race-40ac333a-24aa-11ea-b023-0242ac110005 pods took: 301.0861ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:03:24.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-p7zgs" for this suite.
Dec 22 11:03:32.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:03:33.166: INFO: namespace: e2e-tests-emptydir-wrapper-p7zgs, resource: bindings, ignored listing per whitelist
Dec 22 11:03:33.173: INFO: namespace e2e-tests-emptydir-wrapper-p7zgs deletion completed in 8.234827628s
• [SLOW TEST:535.639 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:03:33.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-xf2z9/configmap-test-b1e35f5b-24aa-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 11:03:33.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-xf2z9" to be "success or failure"
Dec 22 11:03:33.649: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.65278ms
Dec 22 11:03:35.977: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355607602s
Dec 22 11:03:39.182: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.560874142s
Dec 22 11:03:41.193: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.572003838s
Dec 22 11:03:43.832: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.21089161s
Dec 22 11:03:45.861: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.239528504s
Dec 22 11:03:48.044: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.422762752s
Dec 22 11:03:50.645: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 17.023233053s STEP: Saw pod success Dec 22 11:03:50.645: INFO: Pod "pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:03:50.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005 container env-test: STEP: delete the pod Dec 22 11:03:51.162: INFO: Waiting for pod pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005 to disappear Dec 22 11:03:51.181: INFO: Pod pod-configmaps-b1e69e20-24aa-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:03:51.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xf2z9" for this suite. Dec 22 11:03:59.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:03:59.337: INFO: namespace: e2e-tests-configmap-xf2z9, resource: bindings, ignored listing per whitelist Dec 22 11:03:59.354: INFO: namespace e2e-tests-configmap-xf2z9 deletion completed in 8.1607003s • [SLOW TEST:26.181 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:03:59.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:03:59.561: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Dec 22 11:03:59.656: INFO: Number of nodes with available pods: 0 Dec 22 11:03:59.656: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Dec 22 11:03:59.703: INFO: Number of nodes with available pods: 0 Dec 22 11:03:59.703: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:00.716: INFO: Number of nodes with available pods: 0 Dec 22 11:04:00.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:01.717: INFO: Number of nodes with available pods: 0 Dec 22 11:04:01.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:02.747: INFO: Number of nodes with available pods: 0 Dec 22 11:04:02.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:03.733: INFO: Number of nodes with available pods: 0 Dec 22 11:04:03.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:04.713: INFO: Number of nodes with available pods: 0 Dec 22 11:04:04.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:06.488: INFO: Number of nodes with available pods: 0 Dec 22 11:04:06.489: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:06.713: INFO: Number of nodes with available pods: 0 Dec 22 11:04:06.713: INFO: Node 
hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:07.714: INFO: Number of nodes with available pods: 0 Dec 22 11:04:07.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:08.718: INFO: Number of nodes with available pods: 0 Dec 22 11:04:08.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:09.713: INFO: Number of nodes with available pods: 1 Dec 22 11:04:09.713: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 22 11:04:09.772: INFO: Number of nodes with available pods: 1 Dec 22 11:04:09.773: INFO: Number of running nodes: 0, number of available pods: 1 Dec 22 11:04:10.788: INFO: Number of nodes with available pods: 0 Dec 22 11:04:10.788: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Dec 22 11:04:10.819: INFO: Number of nodes with available pods: 0 Dec 22 11:04:10.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:11.830: INFO: Number of nodes with available pods: 0 Dec 22 11:04:11.830: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:12.891: INFO: Number of nodes with available pods: 0 Dec 22 11:04:12.891: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:13.838: INFO: Number of nodes with available pods: 0 Dec 22 11:04:13.838: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:15.457: INFO: Number of nodes with available pods: 0 Dec 22 11:04:15.457: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:15.834: INFO: Number of nodes with available pods: 0 Dec 22 11:04:15.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:16.851: INFO: 
Number of nodes with available pods: 0 Dec 22 11:04:16.851: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:17.835: INFO: Number of nodes with available pods: 0 Dec 22 11:04:17.836: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:18.839: INFO: Number of nodes with available pods: 0 Dec 22 11:04:18.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:21.205: INFO: Number of nodes with available pods: 0 Dec 22 11:04:21.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:21.831: INFO: Number of nodes with available pods: 0 Dec 22 11:04:21.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:22.840: INFO: Number of nodes with available pods: 0 Dec 22 11:04:22.841: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:24.166: INFO: Number of nodes with available pods: 0 Dec 22 11:04:24.166: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:25.453: INFO: Number of nodes with available pods: 0 Dec 22 11:04:25.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:25.852: INFO: Number of nodes with available pods: 0 Dec 22 11:04:25.852: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:26.975: INFO: Number of nodes with available pods: 0 Dec 22 11:04:26.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:28.504: INFO: Number of nodes with available pods: 0 Dec 22 11:04:28.504: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:29.370: INFO: Number of nodes with available pods: 0 Dec 22 11:04:29.370: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:30.421: INFO: Number of nodes with available pods: 0 Dec 22 11:04:30.421: INFO: 
Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:30.874: INFO: Number of nodes with available pods: 0 Dec 22 11:04:30.874: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:31.835: INFO: Number of nodes with available pods: 0 Dec 22 11:04:31.835: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:04:32.853: INFO: Number of nodes with available pods: 1 Dec 22 11:04:32.853: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4pjkt, will wait for the garbage collector to delete the pods Dec 22 11:04:32.957: INFO: Deleting DaemonSet.extensions daemon-set took: 15.180705ms Dec 22 11:04:33.057: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.751209ms Dec 22 11:04:42.669: INFO: Number of nodes with available pods: 0 Dec 22 11:04:42.669: INFO: Number of running nodes: 0, number of available pods: 0 Dec 22 11:04:42.678: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4pjkt/daemonsets","resourceVersion":"15668351"},"items":null} Dec 22 11:04:42.683: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4pjkt/pods","resourceVersion":"15668351"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:04:42.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4pjkt" for this suite. 
Dec 22 11:04:50.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:04:50.945: INFO: namespace: e2e-tests-daemonsets-4pjkt, resource: bindings, ignored listing per whitelist Dec 22 11:04:50.983: INFO: namespace e2e-tests-daemonsets-4pjkt deletion completed in 8.255109914s • [SLOW TEST:51.628 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:04:50.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-d7xfj [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-d7xfj STEP: Creating statefulset with conflicting 
port in namespace e2e-tests-statefulset-d7xfj STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-d7xfj STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-d7xfj Dec 22 11:05:03.623: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-d7xfj, name: ss-0, uid: e73a751d-24aa-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Dec 22 11:05:12.463: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-d7xfj, name: ss-0, uid: e73a751d-24aa-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Dec 22 11:05:12.505: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-d7xfj, name: ss-0, uid: e73a751d-24aa-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Dec 22 11:05:12.701: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-d7xfj STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-d7xfj STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-d7xfj and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 22 11:05:28.502: INFO: Deleting all statefulset in ns e2e-tests-statefulset-d7xfj Dec 22 11:05:28.512: INFO: Scaling statefulset ss to 0 Dec 22 11:05:38.587: INFO: Waiting for statefulset status.replicas updated to 0 Dec 22 11:05:38.604: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:05:38.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-d7xfj" for this suite. 
Dec 22 11:05:46.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:05:46.952: INFO: namespace: e2e-tests-statefulset-d7xfj, resource: bindings, ignored listing per whitelist Dec 22 11:05:47.075: INFO: namespace e2e-tests-statefulset-d7xfj deletion completed in 8.415735529s • [SLOW TEST:56.092 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:05:47.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 22 11:05:47.431: INFO: Waiting up to 5m0s for pod "pod-01aa885b-24ab-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-9kbgm" to be "success or failure" Dec 22 11:05:47.440: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.773207ms Dec 22 11:05:50.810: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.379008894s Dec 22 11:05:52.819: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.388091297s Dec 22 11:05:54.840: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.409563277s Dec 22 11:05:56.867: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.436165755s Dec 22 11:05:58.904: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.473410153s Dec 22 11:06:00.928: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.49771318s STEP: Saw pod success Dec 22 11:06:00.929: INFO: Pod "pod-01aa885b-24ab-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:06:00.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-01aa885b-24ab-11ea-b023-0242ac110005 container test-container: STEP: delete the pod Dec 22 11:06:01.008: INFO: Waiting for pod pod-01aa885b-24ab-11ea-b023-0242ac110005 to disappear Dec 22 11:06:01.019: INFO: Pod pod-01aa885b-24ab-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:06:01.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9kbgm" for this suite. 
Dec 22 11:06:07.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:06:07.112: INFO: namespace: e2e-tests-emptydir-9kbgm, resource: bindings, ignored listing per whitelist Dec 22 11:06:07.244: INFO: namespace e2e-tests-emptydir-9kbgm deletion completed in 6.213821251s • [SLOW TEST:20.168 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:06:07.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Dec 22 11:06:07.666: INFO: Waiting up to 5m0s for pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005" in namespace "e2e-tests-containers-mp58d" to be "success or failure" Dec 22 11:06:07.699: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.973047ms Dec 22 11:06:09.722: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056048331s Dec 22 11:06:11.753: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086740394s Dec 22 11:06:14.104: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437224372s Dec 22 11:06:16.124: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.457955089s Dec 22 11:06:18.200: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.533501695s STEP: Saw pod success Dec 22 11:06:18.200: INFO: Pod "client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:06:18.290: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005 container test-container: STEP: delete the pod Dec 22 11:06:18.434: INFO: Waiting for pod client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005 to disappear Dec 22 11:06:18.440: INFO: Pod client-containers-0dacb5c8-24ab-11ea-b023-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:06:18.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-mp58d" for this suite. 
Dec 22 11:06:26.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:06:26.719: INFO: namespace: e2e-tests-containers-mp58d, resource: bindings, ignored listing per whitelist Dec 22 11:06:26.751: INFO: namespace e2e-tests-containers-mp58d deletion completed in 8.304155896s • [SLOW TEST:19.506 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:06:26.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:06:39.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-m689t" for this suite. 
Dec 22 11:07:35.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:07:35.185: INFO: namespace: e2e-tests-kubelet-test-m689t, resource: bindings, ignored listing per whitelist Dec 22 11:07:35.366: INFO: namespace e2e-tests-kubelet-test-m689t deletion completed in 56.314438576s • [SLOW TEST:68.615 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:07:35.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-4233d989-24ab-11ea-b023-0242ac110005 STEP: Creating secret with name s-test-opt-upd-4233dab7-24ab-11ea-b023-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4233d989-24ab-11ea-b023-0242ac110005 STEP: Updating secret s-test-opt-upd-4233dab7-24ab-11ea-b023-0242ac110005 STEP: Creating secret with name 
s-test-opt-create-4233db2b-24ab-11ea-b023-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:09:15.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zkg96" for this suite. Dec 22 11:09:39.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:09:39.402: INFO: namespace: e2e-tests-projected-zkg96, resource: bindings, ignored listing per whitelist Dec 22 11:09:39.408: INFO: namespace e2e-tests-projected-zkg96 deletion completed in 24.329129445s • [SLOW TEST:124.041 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:09:39.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod 
from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 22 11:09:39.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7pt8x' Dec 22 11:09:42.571: INFO: stderr: "" Dec 22 11:09:42.571: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Dec 22 11:09:42.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7pt8x' Dec 22 11:09:51.317: INFO: stderr: "" Dec 22 11:09:51.317: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:09:51.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7pt8x" for this suite. 
Dec 22 11:09:59.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:09:59.448: INFO: namespace: e2e-tests-kubectl-7pt8x, resource: bindings, ignored listing per whitelist
Dec 22 11:09:59.478: INFO: namespace e2e-tests-kubectl-7pt8x deletion completed in 8.148206974s
• [SLOW TEST:20.070 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:09:59.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 22 11:09:59.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-qlh4s run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 22 11:10:14.260: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 22 11:10:14.260: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:10:16.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qlh4s" for this suite.
Dec 22 11:10:22.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:10:22.422: INFO: namespace: e2e-tests-kubectl-qlh4s, resource: bindings, ignored listing per whitelist
Dec 22 11:10:22.509: INFO: namespace e2e-tests-kubectl-qlh4s deletion completed in 6.22415468s
• [SLOW TEST:23.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:10:22.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 22 11:10:53.384: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:53.384: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:54.218: INFO: Exec stderr: ""
Dec 22 11:10:54.218: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:54.218: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:54.756: INFO: Exec stderr: ""
Dec 22 11:10:54.756: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:54.757: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:55.115: INFO: Exec stderr: ""
Dec 22 11:10:55.115: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:55.115: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:55.501: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 22 11:10:55.502: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:55.502: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:55.994: INFO: Exec stderr: ""
Dec 22 11:10:55.994: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:55.994: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:56.445: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 22 11:10:56.445: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:56.445: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:57.130: INFO: Exec stderr: ""
Dec 22 11:10:57.130: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:57.130: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:57.436: INFO: Exec stderr: ""
Dec 22 11:10:57.437: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:57.437: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:57.739: INFO: Exec stderr: ""
Dec 22 11:10:57.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xmp4k PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:10:57.740: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:10:58.246: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:10:58.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-xmp4k" for this suite.
Dec 22 11:11:56.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:11:56.613: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-xmp4k, resource: bindings, ignored listing per whitelist
Dec 22 11:11:56.660: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-xmp4k deletion completed in 58.398933072s
• [SLOW TEST:94.150 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:11:56.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cbp42
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 11:11:56.819: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 11:12:38.148: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-cbp42 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:12:38.148: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:12:38.636: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:12:38.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cbp42" for this suite.
Dec 22 11:13:02.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:13:02.764: INFO: namespace: e2e-tests-pod-network-test-cbp42, resource: bindings, ignored listing per whitelist
Dec 22 11:13:02.791: INFO: namespace e2e-tests-pod-network-test-cbp42 deletion completed in 24.141232242s
• [SLOW TEST:66.130 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:13:02.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 22 11:13:02.996: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-wqblm" to be "success or failure"
Dec 22 11:13:03.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.325797ms
Dec 22 11:13:05.522: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.525961849s
Dec 22 11:13:07.534: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538679648s
Dec 22 11:13:09.547: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.551678359s
Dec 22 11:13:11.762: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.766753162s
Dec 22 11:13:14.752: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.756258726s
Dec 22 11:13:17.982: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.98646643s
Dec 22 11:13:20.539: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.542963024s
Dec 22 11:13:22.570: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.573985372s
STEP: Saw pod success
Dec 22 11:13:22.570: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 22 11:13:22.600: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 22 11:13:23.065: INFO: Waiting for pod pod-host-path-test to disappear
Dec 22 11:13:23.077: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:13:23.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-wqblm" for this suite.
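The hostPath conformance test above runs a short-lived pod that mounts a hostPath volume and verifies the mode the volume is presented with. A hedged sketch of a pod with that shape — the pod name and container name appear in the log, but the image, command, and host path are illustrative, not the suite's actual fixture:

```yaml
# Illustrative hostPath pod; only sketches the structure the test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c %a /test-volume"]   # print the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test      # illustrative host directory
      type: DirectoryOrCreate
```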
Dec 22 11:13:29.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:13:29.330: INFO: namespace: e2e-tests-hostpath-wqblm, resource: bindings, ignored listing per whitelist
Dec 22 11:13:29.348: INFO: namespace e2e-tests-hostpath-wqblm deletion completed in 6.259049253s
• [SLOW TEST:26.557 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:13:29.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-jxclq
Dec 22 11:13:39.548: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-jxclq
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 11:13:39.556: INFO: Initial restart count of pod liveness-exec is 0
Dec 22 11:14:30.223: INFO: Restart count of pod e2e-tests-container-probe-jxclq/liveness-exec is now 1 (50.666933046s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:14:30.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jxclq" for this suite.
Dec 22 11:14:36.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:14:36.645: INFO: namespace: e2e-tests-container-probe-jxclq, resource: bindings, ignored listing per whitelist
Dec 22 11:14:36.673: INFO: namespace e2e-tests-container-probe-jxclq deletion completed in 6.394027467s
• [SLOW TEST:67.324 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:14:36.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-3d479b75-24ac-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 11:14:37.120: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-8gjtp" to be "success or failure"
Dec 22 11:14:37.163: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.235118ms
Dec 22 11:14:39.502: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382022849s
Dec 22 11:14:41.521: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401108681s
Dec 22 11:14:43.655: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.534793335s
Dec 22 11:14:45.695: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.575062637s
Dec 22 11:14:47.710: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.589557363s
STEP: Saw pod success
Dec 22 11:14:47.710: INFO: Pod "pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:14:47.728: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 22 11:14:49.197: INFO: Waiting for pod pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005 to disappear
Dec 22 11:14:49.211: INFO: Pod pod-projected-secrets-3d4883c4-24ac-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:14:49.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8gjtp" for this suite.
Dec 22 11:14:55.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:14:55.533: INFO: namespace: e2e-tests-projected-8gjtp, resource: bindings, ignored listing per whitelist
Dec 22 11:14:55.553: INFO: namespace e2e-tests-projected-8gjtp deletion completed in 6.317067296s
• [SLOW TEST:18.880 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:14:55.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 22 11:14:56.068: INFO: Waiting up to 5m0s for pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005" in namespace "e2e-tests-var-expansion-f76b6" to be "success or failure"
Dec 22 11:14:56.096: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.683543ms
Dec 22 11:14:59.188: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.119990683s
Dec 22 11:15:01.199: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.130402131s
Dec 22 11:15:03.216: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.147303655s
Dec 22 11:15:05.418: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.349464643s
Dec 22 11:15:07.582: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.513038752s
Dec 22 11:15:11.068: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.999743811s
STEP: Saw pod success
Dec 22 11:15:11.069: INFO: Pod "var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:15:11.184: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 22 11:15:11.698: INFO: Waiting for pod var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005 to disappear
Dec 22 11:15:11.878: INFO: Pod var-expansion-48aa3c14-24ac-11ea-b023-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:15:11.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-f76b6" for this suite.
Dec 22 11:15:20.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:15:20.197: INFO: namespace: e2e-tests-var-expansion-f76b6, resource: bindings, ignored listing per whitelist
Dec 22 11:15:20.759: INFO: namespace e2e-tests-var-expansion-f76b6 deletion completed in 8.838805055s
• [SLOW TEST:25.206 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:15:20.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 22 11:15:20.926: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 22 11:15:25.944: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:15:27.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-n29m2" for this suite.
Dec 22 11:15:39.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:15:40.221: INFO: namespace: e2e-tests-replication-controller-n29m2, resource: bindings, ignored listing per whitelist
Dec 22 11:15:40.426: INFO: namespace e2e-tests-replication-controller-n29m2 deletion completed in 11.960320643s
• [SLOW TEST:19.667 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:15:40.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zwmrf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zwmrf to expose endpoints map[]
Dec 22 11:15:40.906: INFO: Get endpoints failed (29.232257ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 22 11:15:42.652: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zwmrf exposes endpoints map[] (1.774927466s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zwmrf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zwmrf to expose endpoints map[pod1:[100]]
Dec 22 11:15:47.317: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.649198171s elapsed, will retry)
Dec 22 11:15:52.323: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zwmrf exposes endpoints map[pod1:[100]] (9.654751322s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zwmrf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zwmrf to expose endpoints map[pod2:[101] pod1:[100]]
Dec 22 11:15:57.666: INFO: Unexpected endpoints: found map[64754865-24ac-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.326705461s elapsed, will retry)
Dec 22 11:16:02.382: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zwmrf exposes endpoints map[pod1:[100] pod2:[101]] (10.043643144s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zwmrf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zwmrf to expose endpoints map[pod2:[101]]
Dec 22 11:16:03.876: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zwmrf exposes endpoints map[pod2:[101]] (1.484649882s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zwmrf
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zwmrf to expose endpoints map[]
Dec 22 11:16:05.470: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zwmrf exposes endpoints map[] (1.257400077s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:16:05.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zwmrf" for this suite.
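The multiport-endpoints test above drives a Service whose two ports resolve to different container ports (100 on pod1, 101 on pod2, per the endpoint maps in the log). A hedged sketch of such a Service — only the target ports come from this run; the selector, port names, and service port numbers are illustrative, and Kubernetes requires ports to be named when a Service exposes more than one:

```yaml
# Sketch of a two-port Service like multi-endpoint-test; selector and
# service-side port numbers are assumptions, targetPorts come from the log.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1        # names are mandatory with multiple ports
    port: 80
    targetPort: 100        # matches endpoints map[pod1:[100]]
  - name: portname2
    port: 81
    targetPort: 101        # matches endpoints map[pod2:[101]]
```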
Dec 22 11:16:30.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:16:30.666: INFO: namespace: e2e-tests-services-zwmrf, resource: bindings, ignored listing per whitelist
Dec 22 11:16:30.841: INFO: namespace e2e-tests-services-zwmrf deletion completed in 25.143518227s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:50.414 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:16:30.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-81660324-24ac-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 11:16:31.249: INFO: Waiting up to 5m0s for pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-xn5gm" to be "success or failure"
Dec 22 11:16:31.312: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.529119ms
Dec 22 11:16:33.465: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215355825s
Dec 22 11:16:35.496: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246694414s
Dec 22 11:16:38.359: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.11002289s
Dec 22 11:16:40.385: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.135424958s
Dec 22 11:16:42.516: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.266708587s
STEP: Saw pod success
Dec 22 11:16:42.516: INFO: Pod "pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:16:42.556: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Dec 22 11:16:42.764: INFO: Waiting for pod pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005 to disappear
Dec 22 11:16:42.779: INFO: Pod pod-configmaps-81671cbc-24ac-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:16:42.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xn5gm" for this suite.
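"Mappings and Item mode set" in the test above means the ConfigMap volume projects a key to a chosen path with an explicit per-file mode. A hedged sketch of that volume shape — the container name appears in the log, but the ConfigMap key, path, image, and mode are illustrative, not the suite's actual values:

```yaml
# Illustrative ConfigMap volume with a key-to-path mapping and an item mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # illustrative ConfigMap name
      items:
      - key: data-2                     # mapping: key projected to new path
        path: path/to/data-2
        mode: 0400                      # per-item mode, overrides defaultMode
```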
Dec 22 11:16:48.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:16:49.075: INFO: namespace: e2e-tests-configmap-xn5gm, resource: bindings, ignored listing per whitelist
Dec 22 11:16:49.255: INFO: namespace e2e-tests-configmap-xn5gm deletion completed in 6.460085826s
• [SLOW TEST:18.413 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:16:49.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 22 11:16:49.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w5jfh'
Dec 22 11:16:49.956: INFO: stderr: ""
Dec 22 11:16:49.957: INFO: stdout:
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 22 11:16:49.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:16:50.345: INFO: stderr: "" Dec 22 11:16:50.345: INFO: stdout: "update-demo-nautilus-hqw8s update-demo-nautilus-jhgqr " Dec 22 11:16:50.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqw8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:16:50.522: INFO: stderr: "" Dec 22 11:16:50.522: INFO: stdout: "" Dec 22 11:16:50.522: INFO: update-demo-nautilus-hqw8s is created but not running Dec 22 11:16:55.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:16:55.643: INFO: stderr: "" Dec 22 11:16:55.643: INFO: stdout: "update-demo-nautilus-hqw8s update-demo-nautilus-jhgqr " Dec 22 11:16:55.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqw8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:16:55.745: INFO: stderr: "" Dec 22 11:16:55.745: INFO: stdout: "" Dec 22 11:16:55.745: INFO: update-demo-nautilus-hqw8s is created but not running Dec 22 11:17:00.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:00.912: INFO: stderr: "" Dec 22 11:17:00.912: INFO: stdout: "update-demo-nautilus-hqw8s update-demo-nautilus-jhgqr " Dec 22 11:17:00.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqw8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:01.058: INFO: stderr: "" Dec 22 11:17:01.058: INFO: stdout: "" Dec 22 11:17:01.058: INFO: update-demo-nautilus-hqw8s is created but not running Dec 22 11:17:06.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:06.208: INFO: stderr: "" Dec 22 11:17:06.209: INFO: stdout: "update-demo-nautilus-hqw8s update-demo-nautilus-jhgqr " Dec 22 11:17:06.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqw8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:06.332: INFO: stderr: "" Dec 22 11:17:06.333: INFO: stdout: "true" Dec 22 11:17:06.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqw8s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:06.518: INFO: stderr: "" Dec 22 11:17:06.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 22 11:17:06.518: INFO: validating pod update-demo-nautilus-hqw8s Dec 22 11:17:06.570: INFO: got data: { "image": "nautilus.jpg" } Dec 22 11:17:06.571: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 22 11:17:06.571: INFO: update-demo-nautilus-hqw8s is verified up and running Dec 22 11:17:06.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhgqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:06.726: INFO: stderr: "" Dec 22 11:17:06.726: INFO: stdout: "true" Dec 22 11:17:06.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhgqr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:06.875: INFO: stderr: "" Dec 22 11:17:06.876: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 22 11:17:06.876: INFO: validating pod update-demo-nautilus-jhgqr Dec 22 11:17:06.889: INFO: got data: { "image": "nautilus.jpg" } Dec 22 11:17:06.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 22 11:17:06.890: INFO: update-demo-nautilus-jhgqr is verified up and running STEP: scaling down the replication controller Dec 22 11:17:06.892: INFO: scanned /root for discovery docs: Dec 22 11:17:06.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:08.617: INFO: stderr: "" Dec 22 11:17:08.617: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 22 11:17:08.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:08.734: INFO: stderr: "" Dec 22 11:17:08.734: INFO: stdout: "update-demo-nautilus-hqw8s update-demo-nautilus-jhgqr " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 22 11:17:13.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:13.929: INFO: stderr: "" Dec 22 11:17:13.929: INFO: stdout: "update-demo-nautilus-jhgqr " Dec 22 11:17:13.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhgqr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:14.077: INFO: stderr: "" Dec 22 11:17:14.077: INFO: stdout: "true" Dec 22 11:17:14.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhgqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:14.188: INFO: stderr: "" Dec 22 11:17:14.188: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 22 11:17:14.188: INFO: validating pod update-demo-nautilus-jhgqr Dec 22 11:17:14.208: INFO: got data: { "image": "nautilus.jpg" } Dec 22 11:17:14.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 22 11:17:14.208: INFO: update-demo-nautilus-jhgqr is verified up and running STEP: scaling up the replication controller Dec 22 11:17:14.212: INFO: scanned /root for discovery docs: Dec 22 11:17:14.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:15.409: INFO: stderr: "" Dec 22 11:17:15.409: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
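The per-pod readiness checks above all pipe the pod through the same Go template: `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`. It prints `true` only when the named container reports a `running` state, so an empty stdout is what produces the "is created but not running" lines. A Python equivalent over the pod's JSON structure (the helper name and sample dicts are mine, for illustration only):

```python
def container_running(pod: dict, name: str) -> str:
    """Return "true" if the named container has a running state, else "" —
    the same contract as the Go template the test pipes through kubectl."""
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        # A container's state is a dict keyed by exactly one of
        # "waiting", "running", or "terminated".
        if cs.get("name") == name and "running" in cs.get("state", {}):
            out += "true"
    return out

pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2019-12-22T11:17:04Z"}}}]}}
```

The test loops on this check every 5 seconds until stdout becomes `"true"`, as the repeated kubectl invocations in the log show.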
Dec 22 11:17:15.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:15.606: INFO: stderr: "" Dec 22 11:17:15.606: INFO: stdout: "update-demo-nautilus-d2gt8 update-demo-nautilus-jhgqr " Dec 22 11:17:15.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:15.734: INFO: stderr: "" Dec 22 11:17:15.734: INFO: stdout: "" Dec 22 11:17:15.734: INFO: update-demo-nautilus-d2gt8 is created but not running Dec 22 11:17:20.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:20.994: INFO: stderr: "" Dec 22 11:17:20.994: INFO: stdout: "update-demo-nautilus-d2gt8 update-demo-nautilus-jhgqr " Dec 22 11:17:20.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:21.592: INFO: stderr: "" Dec 22 11:17:21.592: INFO: stdout: "" Dec 22 11:17:21.593: INFO: update-demo-nautilus-d2gt8 is created but not running Dec 22 11:17:26.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:26.773: INFO: stderr: "" Dec 22 11:17:26.773: INFO: stdout: "update-demo-nautilus-d2gt8 update-demo-nautilus-jhgqr " Dec 22 11:17:26.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:26.933: INFO: stderr: "" Dec 22 11:17:26.933: INFO: stdout: "true" Dec 22 11:17:26.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gt8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:27.044: INFO: stderr: "" Dec 22 11:17:27.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 22 11:17:27.044: INFO: validating pod update-demo-nautilus-d2gt8 Dec 22 11:17:27.056: INFO: got data: { "image": "nautilus.jpg" } Dec 22 11:17:27.056: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 22 11:17:27.056: INFO: update-demo-nautilus-d2gt8 is verified up and running Dec 22 11:17:27.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhgqr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:27.174: INFO: stderr: "" Dec 22 11:17:27.174: INFO: stdout: "true" Dec 22 11:17:27.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhgqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:27.279: INFO: stderr: "" Dec 22 11:17:27.279: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 22 11:17:27.279: INFO: validating pod update-demo-nautilus-jhgqr Dec 22 11:17:27.334: INFO: got data: { "image": "nautilus.jpg" } Dec 22 11:17:27.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 22 11:17:27.334: INFO: update-demo-nautilus-jhgqr is verified up and running STEP: using delete to clean up resources Dec 22 11:17:27.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:27.463: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 22 11:17:27.463: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 22 11:17:27.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-w5jfh' Dec 22 11:17:27.633: INFO: stderr: "No resources found.\n" Dec 22 11:17:27.633: INFO: stdout: "" Dec 22 11:17:27.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-w5jfh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 22 11:17:27.872: INFO: stderr: "" Dec 22 11:17:27.872: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:17:27.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w5jfh" for this suite. 
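During the scale-down above, the framework compares the pod names returned by `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}}` against the desired replica count, emitting lines like `Replicas for name=update-demo: expected=1 actual=2` until they match. A sketch of that comparison in Python (the function and variable names are mine; the stdout string is taken verbatim from the log):

```python
def replica_names(stdout: str) -> list[str]:
    """Split the space-separated template output ("name1 name2 ") into pod names.

    str.split() also swallows the trailing space the template emits.
    """
    return stdout.split()

# stdout as captured in the log, while two pods still existed:
stdout = "update-demo-nautilus-hqw8s update-demo-nautilus-jhgqr "
names = replica_names(stdout)
expected = 1
status = f"Replicas for name=update-demo: expected={expected} actual={len(names)}"
```

When `actual` differs from `expected`, the test sleeps and re-queries, which is why the same `get pods` command repeats every ~5 seconds in the log.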
Dec 22 11:17:52.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:17:52.127: INFO: namespace: e2e-tests-kubectl-w5jfh, resource: bindings, ignored listing per whitelist Dec 22 11:17:52.317: INFO: namespace e2e-tests-kubectl-w5jfh deletion completed in 24.278218734s • [SLOW TEST:63.062 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:17:52.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Dec 22 11:17:52.721: INFO: Waiting up to 5m0s for pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005" in namespace "e2e-tests-var-expansion-hszgp" to be "success or failure" Dec 22 11:17:52.729: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.675192ms Dec 22 11:17:54.772: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051005722s Dec 22 11:17:56.800: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07858837s Dec 22 11:17:59.208: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48648856s Dec 22 11:18:01.226: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50500927s Dec 22 11:18:03.247: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.52548876s STEP: Saw pod success Dec 22 11:18:03.247: INFO: Pod "var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:18:03.261: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005 container dapi-container: STEP: delete the pod Dec 22 11:18:03.324: INFO: Waiting for pod var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005 to disappear Dec 22 11:18:03.977: INFO: Pod var-expansion-b1e8a745-24ac-11ea-b023-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:18:03.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-hszgp" for this suite. 
Dec 22 11:18:10.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:18:10.702: INFO: namespace: e2e-tests-var-expansion-hszgp, resource: bindings, ignored listing per whitelist Dec 22 11:18:10.722: INFO: namespace e2e-tests-var-expansion-hszgp deletion completed in 6.572177791s • [SLOW TEST:18.405 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:18:10.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:18:10.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 22 11:18:11.184: INFO: stderr: "" Dec 22 11:18:11.184: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:18:11.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dzx6d" for this suite. Dec 22 11:18:17.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:18:17.417: INFO: namespace: e2e-tests-kubectl-dzx6d, resource: bindings, ignored listing per whitelist Dec 22 11:18:17.432: INFO: namespace e2e-tests-kubectl-dzx6d deletion completed in 6.230231582s • [SLOW TEST:6.710 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:18:17.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token 
automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Dec 22 11:18:18.188: INFO: created pod pod-service-account-defaultsa Dec 22 11:18:18.188: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 22 11:18:18.325: INFO: created pod pod-service-account-mountsa Dec 22 11:18:18.326: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 22 11:18:18.497: INFO: created pod pod-service-account-nomountsa Dec 22 11:18:18.497: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 22 11:18:18.644: INFO: created pod pod-service-account-defaultsa-mountspec Dec 22 11:18:18.644: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 22 11:18:18.690: INFO: created pod pod-service-account-mountsa-mountspec Dec 22 11:18:18.691: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 22 11:18:18.902: INFO: created pod pod-service-account-nomountsa-mountspec Dec 22 11:18:18.902: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 22 11:18:20.227: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 22 11:18:20.228: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 22 11:18:21.516: INFO: created pod pod-service-account-mountsa-nomountspec Dec 22 11:18:21.516: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 22 11:18:22.750: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 22 11:18:22.750: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:18:22.751: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-4vp94" for this suite. Dec 22 11:18:51.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:18:51.695: INFO: namespace: e2e-tests-svcaccounts-4vp94, resource: bindings, ignored listing per whitelist Dec 22 11:18:51.796: INFO: namespace e2e-tests-svcaccounts-4vp94 deletion completed in 27.523851275s • [SLOW TEST:34.364 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:18:51.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Dec 22 11:18:52.021: INFO: Waiting up to 5m0s for pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005" in namespace "e2e-tests-containers-pkt66" to be "success or failure" Dec 22 11:18:52.117: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 95.438598ms Dec 22 11:18:54.392: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370917368s Dec 22 11:18:56.409: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387949226s Dec 22 11:18:58.428: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406700017s Dec 22 11:19:00.465: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.443583576s Dec 22 11:19:02.490: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468118386s STEP: Saw pod success Dec 22 11:19:02.490: INFO: Pod "client-containers-d54e12ee-24ac-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:19:02.497: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d54e12ee-24ac-11ea-b023-0242ac110005 container test-container: STEP: delete the pod Dec 22 11:19:02.845: INFO: Waiting for pod client-containers-d54e12ee-24ac-11ea-b023-0242ac110005 to disappear Dec 22 11:19:02.909: INFO: Pod client-containers-d54e12ee-24ac-11ea-b023-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:19:02.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-pkt66" for this suite. 
Dec 22 11:19:11.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:19:11.263: INFO: namespace: e2e-tests-containers-pkt66, resource: bindings, ignored listing per whitelist
Dec 22 11:19:11.348: INFO: namespace e2e-tests-containers-pkt66 deletion completed in 8.34833039s
• [SLOW TEST:19.552 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:19:11.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8swms
Dec 22 11:19:23.745: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8swms
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 11:19:23.757: INFO: Initial restart count
of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:23:24.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8swms" for this suite.
Dec 22 11:23:30.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:23:30.704: INFO: namespace: e2e-tests-container-probe-8swms, resource: bindings, ignored listing per whitelist
Dec 22 11:23:30.709: INFO: namespace e2e-tests-container-probe-8swms deletion completed in 6.302904353s
• [SLOW TEST:259.361 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:23:30.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 22 11:23:30.989: INFO: namespace
e2e-tests-kubectl-9kngt Dec 22 11:23:30.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9kngt' Dec 22 11:23:33.135: INFO: stderr: "" Dec 22 11:23:33.135: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Dec 22 11:23:34.186: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:34.186: INFO: Found 0 / 1 Dec 22 11:23:35.239: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:35.239: INFO: Found 0 / 1 Dec 22 11:23:36.150: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:36.150: INFO: Found 0 / 1 Dec 22 11:23:37.162: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:37.162: INFO: Found 0 / 1 Dec 22 11:23:39.322: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:39.323: INFO: Found 0 / 1 Dec 22 11:23:40.152: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:40.152: INFO: Found 0 / 1 Dec 22 11:23:41.155: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:41.156: INFO: Found 0 / 1 Dec 22 11:23:42.164: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:42.164: INFO: Found 1 / 1 Dec 22 11:23:42.164: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 22 11:23:42.172: INFO: Selector matched 1 pods for map[app:redis] Dec 22 11:23:42.172: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 22 11:23:42.172: INFO: wait on redis-master startup in e2e-tests-kubectl-9kngt Dec 22 11:23:42.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v2j29 redis-master --namespace=e2e-tests-kubectl-9kngt' Dec 22 11:23:42.412: INFO: stderr: "" Dec 22 11:23:42.412: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Dec 11:23:41.504 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Dec 11:23:41.504 # Server started, Redis version 3.2.12\n1:M 22 Dec 11:23:41.505 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Dec 11:23:41.505 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Dec 22 11:23:42.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-9kngt' Dec 22 11:23:42.810: INFO: stderr: "" Dec 22 11:23:42.810: INFO: stdout: "service/rm2 exposed\n" Dec 22 11:23:42.818: INFO: Service rm2 in namespace e2e-tests-kubectl-9kngt found. STEP: exposing service Dec 22 11:23:44.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-9kngt' Dec 22 11:23:45.191: INFO: stderr: "" Dec 22 11:23:45.191: INFO: stdout: "service/rm3 exposed\n" Dec 22 11:23:45.201: INFO: Service rm3 in namespace e2e-tests-kubectl-9kngt found. 
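The two `kubectl expose` invocations logged above each generate a Service object. A hand-written sketch of what the first one (`rm2`) produces is shown below; the selector is an assumption based on the `Selector matched 1 pods for map[app:redis]` lines earlier in this test, and field values beyond the flags shown in the command are illustrative, not the exact object the server created:

```yaml
# Hypothetical Service equivalent to:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-9kngt
spec:
  selector:
    app: redis          # label the log reports matching the RC's pods (assumed)
  ports:
  - port: 1234          # port the Service listens on (--port)
    targetPort: 6379    # redis port inside the pod (--target-port)
```

The second command (`expose service rm2 --name=rm3 --port=2345 --target-port=6379`) works the same way, except the selector is copied from the existing `rm2` Service rather than from a controller.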
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:23:47.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9kngt" for this suite.
Dec 22 11:24:11.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:24:11.478: INFO: namespace: e2e-tests-kubectl-9kngt, resource: bindings, ignored listing per whitelist
Dec 22 11:24:11.496: INFO: namespace e2e-tests-kubectl-9kngt deletion completed in 24.251565727s
• [SLOW TEST:40.786 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:24:11.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 22 11:24:11.636: INFO: Waiting up to 5m0s for pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-lnqjj" to be "success or failure"
Dec 22 11:24:11.646: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.460544ms
Dec 22 11:24:14.159: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.522255159s
Dec 22 11:24:16.190: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553337197s
Dec 22 11:24:18.535: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.898322236s
Dec 22 11:24:20.616: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.979305187s
Dec 22 11:24:22.639: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.002755399s
Dec 22 11:24:24.682: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.045595603s
STEP: Saw pod success
Dec 22 11:24:24.683: INFO: Pod "downward-api-93d247c4-24ad-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:24:24.703: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-93d247c4-24ad-11ea-b023-0242ac110005 container dapi-container:
STEP: delete the pod
Dec 22 11:24:25.203: INFO: Waiting for pod downward-api-93d247c4-24ad-11ea-b023-0242ac110005 to disappear
Dec 22 11:24:25.260: INFO: Pod downward-api-93d247c4-24ad-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:24:25.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lnqjj" for this suite.
Dec 22 11:24:31.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:24:31.382: INFO: namespace: e2e-tests-downward-api-lnqjj, resource: bindings, ignored listing per whitelist
Dec 22 11:24:31.548: INFO: namespace e2e-tests-downward-api-lnqjj deletion completed in 6.257826629s
• [SLOW TEST:20.052 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:24:31.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-9fcea6f4-24ad-11ea-b023-0242ac110005
STEP: Creating secret with name s-test-opt-upd-9fcea7c8-24ad-11ea-b023-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9fcea6f4-24ad-11ea-b023-0242ac110005
STEP: Updating secret s-test-opt-upd-9fcea7c8-24ad-11ea-b023-0242ac110005
STEP: Creating secret with name s-test-opt-create-9fcea7df-24ad-11ea-b023-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:24:52.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hcds2" for this suite.
Dec 22 11:25:16.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:25:16.214: INFO: namespace: e2e-tests-secrets-hcds2, resource: bindings, ignored listing per whitelist
Dec 22 11:25:16.313: INFO: namespace e2e-tests-secrets-hcds2 deletion completed in 24.244751283s
• [SLOW TEST:44.765 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:25:16.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 11:25:17.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-flkm8'
Dec 22 11:25:17.471: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 11:25:17.471: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 22 11:25:21.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-flkm8'
Dec 22 11:25:22.143: INFO: stderr: ""
Dec 22 11:25:22.144: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:25:22.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-flkm8" for this suite.
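The stderr line above shows that `kubectl run --generator=deployment/v1beta1` was already deprecated when this suite ran. A rough present-day equivalent is a plain Deployment manifest like the sketch below; the `run:` labels mirror what the old generator used to attach, but the exact labels and replica count here are assumptions for illustration:

```yaml
# Hypothetical Deployment equivalent of the deprecated command:
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
#     --generator=deployment/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  namespace: e2e-tests-kubectl-flkm8
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # assumed label, matching the generator's convention
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine` produces a comparable object, which is the replacement the deprecation warning itself suggests.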
Dec 22 11:25:28.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:25:28.777: INFO: namespace: e2e-tests-kubectl-flkm8, resource: bindings, ignored listing per whitelist
Dec 22 11:25:28.779: INFO: namespace e2e-tests-kubectl-flkm8 deletion completed in 6.530668728s
• [SLOW TEST:12.466 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:25:28.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 11:25:28.913: INFO: Creating deployment "nginx-deployment"
Dec 22 11:25:28.923: INFO: Waiting for observed generation 1
Dec 22 11:25:31.986: INFO: Waiting for all required pods to come up
Dec 22 11:25:33.429: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 22 11:26:18.643: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 22 11:26:18.923: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 22 11:26:18.940: INFO: Updating deployment nginx-deployment
Dec 22 11:26:18.940: INFO: Waiting for observed generation 2
Dec 22 11:26:21.872: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 22 11:26:21.924: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 22 11:26:22.549: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 22 11:26:22.803: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 22 11:26:22.804: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 22 11:26:22.820: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 22 11:26:22.841: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 22 11:26:22.841: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 22 11:26:22.873: INFO: Updating deployment nginx-deployment
Dec 22 11:26:22.873: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 22 11:26:25.535: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 22 11:26:27.577: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 22 11:26:30.075: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lfxrs/deployments/nginx-deployment,UID:c1e4d14d-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671063,Generation:3,CreationTimestamp:2019-12-22 11:25:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-22 11:26:20 +0000 UTC 2019-12-22 11:25:28 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-22 11:26:27 +0000 UTC 2019-12-22 11:26:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 22 11:26:30.516: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lfxrs/replicasets/nginx-deployment-5c98f8fb5,UID:dfb676b6-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671045,Generation:3,CreationTimestamp:2019-12-22 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c1e4d14d-24ad-11ea-a994-fa163e34d433 0xc0021b0057 0xc0021b0058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 22 11:26:30.517: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 22 11:26:30.517: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lfxrs/replicasets/nginx-deployment-85ddf47c5d,UID:c1ed5c4c-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671078,Generation:3,CreationTimestamp:2019-12-22 11:25:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c1e4d14d-24ad-11ea-a994-fa163e34d433 0xc0021b0217 0xc0021b0218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 22 11:26:32.271: INFO: Pod "nginx-deployment-5c98f8fb5-4zrqt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4zrqt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-4zrqt,UID:e0277c92-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671023,Generation:0,CreationTimestamp:2019-12-22 11:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e41d7 0xc0017e41d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4240} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.272: INFO: Pod "nginx-deployment-5c98f8fb5-7p968" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7p968,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-7p968,UID:e663724e-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671099,Generation:0,CreationTimestamp:2019-12-22 11:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4317 0xc0017e4318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e43a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.273: INFO: Pod "nginx-deployment-5c98f8fb5-9pj76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9pj76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-9pj76,UID:dfca8ca4-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671031,Generation:0,CreationTimestamp:2019-12-22 11:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4447 0xc0017e4448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-22 11:26:19 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.273: INFO: Pod "nginx-deployment-5c98f8fb5-9vbp5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9vbp5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-9vbp5,UID:e54e6ae5-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671083,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4617 0xc0017e4618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4700} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.275: INFO: Pod "nginx-deployment-5c98f8fb5-b6vc7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b6vc7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-b6vc7,UID:dfcab444-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671035,Generation:0,CreationTimestamp:2019-12-22 11:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4797 0xc0017e4798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-22 11:26:19 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.275: INFO: Pod "nginx-deployment-5c98f8fb5-f9kvz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f9kvz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-f9kvz,UID:e5f551b1-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671090,Generation:0,CreationTimestamp:2019-12-22 11:26:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4957 0xc0017e4958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.276: INFO: Pod "nginx-deployment-5c98f8fb5-gg6xb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gg6xb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-gg6xb,UID:e6a78dca-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671102,Generation:0,CreationTimestamp:2019-12-22 11:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4ac7 0xc0017e4ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.276: INFO: Pod "nginx-deployment-5c98f8fb5-js5f6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-js5f6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-js5f6,UID:e5f5186a-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671091,Generation:0,CreationTimestamp:2019-12-22 11:26:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4c40 0xc0017e4c41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4cb0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.277: INFO: Pod "nginx-deployment-5c98f8fb5-mfgrf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mfgrf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-mfgrf,UID:dfc7b60a-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671013,Generation:0,CreationTimestamp:2019-12-22 11:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4d47 0xc0017e4d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e4e60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e4e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-22 11:26:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.278: INFO: Pod "nginx-deployment-5c98f8fb5-rk28v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rk28v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-rk28v,UID:e6638b3c-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671101,Generation:0,CreationTimestamp:2019-12-22 11:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e4f47 0xc0017e4f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e50f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.278: INFO: Pod "nginx-deployment-5c98f8fb5-tlgz2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tlgz2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-tlgz2,UID:e00ff84a-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671077,Generation:0,CreationTimestamp:2019-12-22 11:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e5187 0xc0017e5188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e51f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-22 11:26:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.278: INFO: Pod "nginx-deployment-5c98f8fb5-tnt5w" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tnt5w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-tnt5w,UID:e662c8ff-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671098,Generation:0,CreationTimestamp:2019-12-22 11:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e5417 0xc0017e5418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5480} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.279: INFO: Pod "nginx-deployment-5c98f8fb5-vhlqx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vhlqx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-5c98f8fb5-vhlqx,UID:e66364de-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671100,Generation:0,CreationTimestamp:2019-12-22 11:26:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dfb676b6-24ad-11ea-a994-fa163e34d433 0xc0017e55f7 0xc0017e55f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.280: INFO: Pod "nginx-deployment-85ddf47c5d-2bmgl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2bmgl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-2bmgl,UID:e542276c-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671066,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e56f7 0xc0017e56f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.281: INFO: Pod "nginx-deployment-85ddf47c5d-5zssx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5zssx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-5zssx,UID:e4df9c0f-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671055,Generation:0,CreationTimestamp:2019-12-22 11:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e57f7 0xc0017e57f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0017e5860} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.281: INFO: Pod "nginx-deployment-85ddf47c5d-6fjfq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6fjfq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-6fjfq,UID:c21055e2-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670961,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e58f7 0xc0017e58f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5960} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-22 11:25:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9649eee1c19620cdd7ac0d1506e18ceccd964287f131c4d1d1189a738d1c7ab2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.282: INFO: Pod "nginx-deployment-85ddf47c5d-797kk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-797kk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-797kk,UID:e5421c14-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671076,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e5a47 0xc0017e5a48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0017e5ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.282: INFO: Pod "nginx-deployment-85ddf47c5d-d4wcw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d4wcw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-d4wcw,UID:e55755c9-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671086,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e5b47 0xc0017e5b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.283: INFO: Pod "nginx-deployment-85ddf47c5d-fhcx8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fhcx8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-fhcx8,UID:e5567fec-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671087,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e5c47 0xc0017e5c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.283: INFO: Pod "nginx-deployment-85ddf47c5d-gfm7b" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gfm7b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-gfm7b,UID:c2106cf9-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670949,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e5d47 0xc0017e5d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0017e5db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-22 11:25:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8714be5abb74e040ac99ce4c1afa2bdb92927a285bcebd1ba600f55176e63a66}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.284: INFO: Pod "nginx-deployment-85ddf47c5d-k8xjb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k8xjb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-k8xjb,UID:c21bb32b-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670967,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e5e97 0xc0017e5e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017e5f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017e5f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-22 11:25:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:12 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://caf0debc3b0cd6b838df564be3145b842beae4bbb2cc69195f68867239e4f221}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.284: INFO: Pod "nginx-deployment-85ddf47c5d-kp74d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kp74d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-kp74d,UID:e5426412-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671075,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0017e5fe7 0xc0017e5fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b20c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b20e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.285: INFO: Pod "nginx-deployment-85ddf47c5d-ljjf7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ljjf7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-ljjf7,UID:e4d8c2dd-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671053,Generation:0,CreationTimestamp:2019-12-22 11:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b21b7 0xc0021b21b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b22e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b2300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.285: INFO: Pod "nginx-deployment-85ddf47c5d-ll4fv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ll4fv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-ll4fv,UID:c1f718f5-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670971,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b23b7 0xc0021b23b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0021b2520} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b25a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-22 11:25:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://98088b8ce68241ee2e577bee6c06a78a9a22ba515d48a66a484b3a80af0c57f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.285: INFO: Pod "nginx-deployment-85ddf47c5d-q28zr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q28zr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-q28zr,UID:e5420945-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671074,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b2727 0xc0021b2728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b2850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b28b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.286: INFO: Pod "nginx-deployment-85ddf47c5d-r9h29" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r9h29,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-r9h29,UID:c21bd4e4-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671047,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b2997 0xc0021b2998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0021b2a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b2ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-22 11:25:31 +0000 UTC,ContainerStatuses:[{nginx {nil nil ContainerStateTerminated{ExitCode:137,Signal:0,Reason:Error,Message:,StartedAt:2019-12-22 11:26:12 +0000 UTC,FinishedAt:2019-12-22 11:26:16 +0000 UTC,ContainerID:docker://878a1340a3ed86bec7ff0d3b7d3fe213d067402dc1dce08fd9ce014c07e0f44b,}} {nil nil nil} false 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://878a1340a3ed86bec7ff0d3b7d3fe213d067402dc1dce08fd9ce014c07e0f44b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.286: INFO: Pod "nginx-deployment-85ddf47c5d-rkw7f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rkw7f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-rkw7f,UID:e5575148-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671084,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b2cc7 0xc0021b2cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b2d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b2dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 
11:26:32.287: INFO: Pod "nginx-deployment-85ddf47c5d-rrlr8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rrlr8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-rrlr8,UID:e556e696-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671085,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b2e87 0xc0021b2e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b2f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b2f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.287: INFO: Pod "nginx-deployment-85ddf47c5d-s4lbw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s4lbw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-s4lbw,UID:c2102dc7-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670946,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b3157 0xc0021b3158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b3220} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b3240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-22 11:25:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:11 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4cc70edd6add735841d9516c80a4660fcbd014ae285259990fc8b91121e93e9d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.288: INFO: Pod "nginx-deployment-85ddf47c5d-tqss6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tqss6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-tqss6,UID:c1fa6cb5-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670957,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b3307 0xc0021b3308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b33e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b3400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-22 11:25:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6590c49ab45a4aa030991e969ca991c63298f3009091dff529ea4cd843bc85e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.289: INFO: Pod "nginx-deployment-85ddf47c5d-xn28r" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xn28r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-xn28r,UID:e556599b-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671079,Generation:0,CreationTimestamp:2019-12-22 11:26:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b34c7 0xc0021b34c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0021b3530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b3550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.290: INFO: Pod "nginx-deployment-85ddf47c5d-xqnd4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xqnd4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-xqnd4,UID:e4de8bf4-24ad-11ea-a994-fa163e34d433,ResourceVersion:15671059,Generation:0,CreationTimestamp:2019-12-22 11:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b35c7 0xc0021b35c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b3630} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b3650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:28 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 22 11:26:32.291: INFO: Pod "nginx-deployment-85ddf47c5d-z2hp4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z2hp4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-lfxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lfxrs/pods/nginx-deployment-85ddf47c5d-z2hp4,UID:c210078c-24ad-11ea-a994-fa163e34d433,ResourceVersion:15670942,Generation:0,CreationTimestamp:2019-12-22 11:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c1ed5c4c-24ad-11ea-a994-fa163e34d433 0xc0021b3807 0xc0021b3808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghngd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghngd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghngd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021b3870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021b3890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:26:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:25:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-22 11:25:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-22 11:26:11 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bcfbb900bf12b73bdf841484e1bf1ba6dcca2420a8babb5c3a3bd5673ca82356}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:26:32.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-lfxrs" for this suite. Dec 22 11:27:32.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:27:32.775: INFO: namespace: e2e-tests-deployment-lfxrs, resource: bindings, ignored listing per whitelist Dec 22 11:27:32.986: INFO: namespace e2e-tests-deployment-lfxrs deletion completed in 1m0.162124488s • [SLOW TEST:124.207 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:27:32.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jjqrt [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Dec 22 11:27:35.945: INFO: Found 0 stateful pods, waiting for 3 Dec 22 11:27:45.969: INFO: Found 1 stateful pods, waiting for 3 Dec 22 11:27:56.152: INFO: Found 1 stateful pods, waiting for 3 Dec 22 11:28:05.963: INFO: Found 2 stateful pods, waiting for 3 Dec 22 11:28:15.963: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:28:15.963: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:28:15.963: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 22 11:28:25.962: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:28:25.962: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:28:25.962: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:28:26.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jjqrt ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 22 11:28:26.759: INFO: stderr: "" Dec 22 11:28:26.759: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 22 11:28:26.759: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating 
StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 22 11:28:26.890: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Dec 22 11:28:37.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jjqrt ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:28:37.695: INFO: stderr: "" Dec 22 11:28:37.695: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 22 11:28:37.695: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 22 11:28:47.783: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:28:47.783: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:28:47.783: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:28:47.783: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:28:59.001: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:28:59.001: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:28:59.001: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:29:07.803: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:29:07.803: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:29:07.803: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-1 to have revision ss2-6c5cd755cd 
update revision ss2-7c9b54fd4c Dec 22 11:29:17.819: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:29:17.819: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:29:27.802: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:29:27.803: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 22 11:29:38.405: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update STEP: Rolling back to a previous revision Dec 22 11:29:47.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jjqrt ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 22 11:29:48.651: INFO: stderr: "" Dec 22 11:29:48.651: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 22 11:29:48.651: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 22 11:29:58.769: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Dec 22 11:30:08.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jjqrt ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:30:09.328: INFO: stderr: "" Dec 22 11:30:09.328: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 22 11:30:09.328: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 22 11:30:20.416: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:30:20.417: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-7c9b54fd4c update revision 
ss2-6c5cd755cd Dec 22 11:30:20.417: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:30:20.417: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:30:30.461: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:30:30.462: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:30:30.462: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:30:40.451: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:30:40.451: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:30:50.465: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:30:50.466: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:31:00.446: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update Dec 22 11:31:00.446: INFO: Waiting for Pod e2e-tests-statefulset-jjqrt/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Dec 22 11:31:10.704: INFO: Waiting for StatefulSet e2e-tests-statefulset-jjqrt/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 22 11:31:20.530: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jjqrt Dec 22 11:31:20.538: INFO: Scaling statefulset ss2 to 0 Dec 22 11:31:50.627: INFO: Waiting for statefulset status.replicas updated to 0 Dec 22 11:31:50.634: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:31:50.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jjqrt" for this suite. Dec 22 11:31:58.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:31:58.784: INFO: namespace: e2e-tests-statefulset-jjqrt, resource: bindings, ignored listing per whitelist Dec 22 11:31:58.900: INFO: namespace e2e-tests-statefulset-jjqrt deletion completed in 8.223257741s • [SLOW TEST:265.913 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:31:58.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service 
account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:32:05.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-9dbgj" for this suite. Dec 22 11:32:11.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:32:11.783: INFO: namespace: e2e-tests-namespaces-9dbgj, resource: bindings, ignored listing per whitelist Dec 22 11:32:11.921: INFO: namespace e2e-tests-namespaces-9dbgj deletion completed in 6.315396394s STEP: Destroying namespace "e2e-tests-nsdeletetest-v97hh" for this suite. Dec 22 11:32:11.928: INFO: Namespace e2e-tests-nsdeletetest-v97hh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-jnm8v" for this suite. 
Dec 22 11:32:18.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:32:18.225: INFO: namespace: e2e-tests-nsdeletetest-jnm8v, resource: bindings, ignored listing per whitelist
Dec 22 11:32:18.276: INFO: namespace e2e-tests-nsdeletetest-jnm8v deletion completed in 6.347983693s
• [SLOW TEST:19.376 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:32:18.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9dtr9
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 11:32:18.935: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 11:33:01.479: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9dtr9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:33:01.479: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:33:03.015: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:33:03.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9dtr9" for this suite.
Dec 22 11:33:19.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:33:19.314: INFO: namespace: e2e-tests-pod-network-test-9dtr9, resource: bindings, ignored listing per whitelist
Dec 22 11:33:19.344: INFO: namespace e2e-tests-pod-network-test-9dtr9 deletion completed in 16.313202509s
• [SLOW TEST:61.067 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:33:19.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-da6af121-24ae-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 11:33:19.644: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-jn2rv" to be "success or failure"
Dec 22 11:33:19.662: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.102189ms
Dec 22 11:33:21.684: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040550365s
Dec 22 11:33:23.704: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060065019s
Dec 22 11:33:25.991: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347163155s
Dec 22 11:33:28.019: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.374661301s
Dec 22 11:33:30.047: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.402727718s
Dec 22 11:33:32.064: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.420375294s
STEP: Saw pod success
Dec 22 11:33:32.064: INFO: Pod "pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:33:32.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Dec 22 11:33:32.353: INFO: Waiting for pod pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005 to disappear
Dec 22 11:33:32.383: INFO: Pod pod-projected-secrets-da6b9a75-24ae-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:33:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jn2rv" for this suite.
Dec 22 11:33:38.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:33:38.504: INFO: namespace: e2e-tests-projected-jn2rv, resource: bindings, ignored listing per whitelist
Dec 22 11:33:38.741: INFO: namespace e2e-tests-projected-jn2rv deletion completed in 6.347815237s
• [SLOW TEST:19.397 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:33:38.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:33:49.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-f9cj5" for this suite.
Dec 22 11:33:55.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:33:55.647: INFO: namespace: e2e-tests-emptydir-wrapper-f9cj5, resource: bindings, ignored listing per whitelist
Dec 22 11:33:55.837: INFO: namespace e2e-tests-emptydir-wrapper-f9cj5 deletion completed in 6.621215109s
• [SLOW TEST:17.097 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:33:55.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-f0313504-24ae-11ea-b023-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-f0313504-24ae-11ea-b023-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:34:10.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4vpb7" for this suite.
Dec 22 11:34:34.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:34:34.677: INFO: namespace: e2e-tests-projected-4vpb7, resource: bindings, ignored listing per whitelist
Dec 22 11:34:34.734: INFO: namespace e2e-tests-projected-4vpb7 deletion completed in 24.231074341s
• [SLOW TEST:38.896 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:34:34.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 11:34:34.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-ds6n7" to be "success or failure"
Dec 22 11:34:34.933: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.624073ms
Dec 22 11:34:37.289: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363423811s
Dec 22 11:34:39.313: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387256013s
Dec 22 11:34:41.720: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.794409214s
Dec 22 11:34:43.738: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812402023s
Dec 22 11:34:45.771: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.84547369s
STEP: Saw pod success
Dec 22 11:34:45.771: INFO: Pod "downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:34:45.882: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005 container client-container:
STEP: delete the pod
Dec 22 11:34:46.098: INFO: Waiting for pod downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005 to disappear
Dec 22 11:34:46.104: INFO: Pod downwardapi-volume-0754a61e-24af-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:34:46.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ds6n7" for this suite.
Dec 22 11:34:52.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:34:52.266: INFO: namespace: e2e-tests-downward-api-ds6n7, resource: bindings, ignored listing per whitelist
Dec 22 11:34:52.340: INFO: namespace e2e-tests-downward-api-ds6n7 deletion completed in 6.227804419s
• [SLOW TEST:17.606 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:34:52.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 11:34:52.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-skxsb'
Dec 22 11:34:54.724: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 11:34:54.725: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 22 11:34:54.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-skxsb'
Dec 22 11:34:54.973: INFO: stderr: ""
Dec 22 11:34:54.974: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:34:54.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-skxsb" for this suite.
Dec 22 11:35:19.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:35:19.174: INFO: namespace: e2e-tests-kubectl-skxsb, resource: bindings, ignored listing per whitelist
Dec 22 11:35:19.354: INFO: namespace e2e-tests-kubectl-skxsb deletion completed in 24.363890592s
• [SLOW TEST:27.013 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:35:19.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 22 11:35:19.615: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9czfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-9czfb/configmaps/e2e-watch-test-watch-closed,UID:21f66d64-24af-11ea-a994-fa163e34d433,ResourceVersion:15672444,Generation:0,CreationTimestamp:2019-12-22 11:35:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 11:35:19.615: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9czfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-9czfb/configmaps/e2e-watch-test-watch-closed,UID:21f66d64-24af-11ea-a994-fa163e34d433,ResourceVersion:15672446,Generation:0,CreationTimestamp:2019-12-22 11:35:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 22 11:35:19.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9czfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-9czfb/configmaps/e2e-watch-test-watch-closed,UID:21f66d64-24af-11ea-a994-fa163e34d433,ResourceVersion:15672447,Generation:0,CreationTimestamp:2019-12-22 11:35:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 11:35:19.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9czfb,SelfLink:/api/v1/namespaces/e2e-tests-watch-9czfb/configmaps/e2e-watch-test-watch-closed,UID:21f66d64-24af-11ea-a994-fa163e34d433,ResourceVersion:15672448,Generation:0,CreationTimestamp:2019-12-22 11:35:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:35:19.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9czfb" for this suite.
Dec 22 11:35:25.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:35:25.824: INFO: namespace: e2e-tests-watch-9czfb, resource: bindings, ignored listing per whitelist
Dec 22 11:35:25.925: INFO: namespace e2e-tests-watch-9czfb deletion completed in 6.206273511s
• [SLOW TEST:6.571 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:35:25.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 11:35:26.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-rhghp" to be "success or failure"
Dec 22 11:35:26.294: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.692689ms
Dec 22 11:35:28.420: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161162172s
Dec 22 11:35:30.440: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181275977s
Dec 22 11:35:32.883: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62384856s
Dec 22 11:35:34.910: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.651368597s
Dec 22 11:35:36.928: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668625569s
Dec 22 11:35:38.946: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.686862919s
STEP: Saw pod success
Dec 22 11:35:38.946: INFO: Pod "downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:35:38.951: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005 container client-container:
STEP: delete the pod
Dec 22 11:35:39.351: INFO: Waiting for pod downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005 to disappear
Dec 22 11:35:39.455: INFO: Pod downwardapi-volume-25eda744-24af-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:35:39.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rhghp" for this suite.
Dec 22 11:35:45.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:35:45.606: INFO: namespace: e2e-tests-downward-api-rhghp, resource: bindings, ignored listing per whitelist
Dec 22 11:35:45.841: INFO: namespace e2e-tests-downward-api-rhghp deletion completed in 6.370587162s
• [SLOW TEST:19.915 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:35:45.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tbm8x
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 11:35:46.054: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 11:36:18.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-tbm8x PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 11:36:18.192: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 11:36:18.745: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:36:18.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-tbm8x" for this suite.
Dec 22 11:36:42.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:36:42.949: INFO: namespace: e2e-tests-pod-network-test-tbm8x, resource: bindings, ignored listing per whitelist
Dec 22 11:36:42.982: INFO: namespace e2e-tests-pod-network-test-tbm8x deletion completed in 24.218916597s
• [SLOW TEST:57.139 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:36:42.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 22 11:36:43.269: INFO: Waiting up to 5m0s for pod "pod-53d42954-24af-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-zz5dz" to be "success or failure"
Dec 22 11:36:43.277: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.079206ms
Dec 22 11:36:45.318: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048806533s
Dec 22 11:36:47.352: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082665618s
Dec 22 11:36:49.830: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.56019479s
Dec 22 11:36:51.847: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.577822754s
Dec 22 11:36:53.887: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.617410186s
STEP: Saw pod success
Dec 22 11:36:53.887: INFO: Pod "pod-53d42954-24af-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:36:53.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-53d42954-24af-11ea-b023-0242ac110005 container test-container:
STEP: delete the pod
Dec 22 11:36:54.170: INFO: Waiting for pod pod-53d42954-24af-11ea-b023-0242ac110005 to disappear
Dec 22 11:36:54.209: INFO: Pod pod-53d42954-24af-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:36:54.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zz5dz" for this suite.
Dec 22 11:37:00.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:37:00.650: INFO: namespace: e2e-tests-emptydir-zz5dz, resource: bindings, ignored listing per whitelist
Dec 22 11:37:00.666: INFO: namespace e2e-tests-emptydir-zz5dz deletion completed in 6.312665408s
• [SLOW TEST:17.684 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:37:00.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 22 11:37:01.022: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9zmb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-9zmb5/configmaps/e2e-watch-test-resource-version,UID:5e5424a8-24af-11ea-a994-fa163e34d433,ResourceVersion:15672680,Generation:0,CreationTimestamp:2019-12-22 11:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 11:37:01.022: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9zmb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-9zmb5/configmaps/e2e-watch-test-resource-version,UID:5e5424a8-24af-11ea-a994-fa163e34d433,ResourceVersion:15672681,Generation:0,CreationTimestamp:2019-12-22 11:37:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:37:01.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9zmb5" for this suite.
Dec 22 11:37:07.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:37:07.125: INFO: namespace: e2e-tests-watch-9zmb5, resource: bindings, ignored listing per whitelist
Dec 22 11:37:07.228: INFO: namespace e2e-tests-watch-9zmb5 deletion completed in 6.202031305s
• [SLOW TEST:6.562 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:37:07.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-622eddee-24af-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 11:37:07.513: INFO: Waiting up to 5m0s for pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-8xp5s" to be "success or failure"
Dec 22 11:37:07.538: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.84647ms
Dec 22 11:37:10.043: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.52943624s
Dec 22 11:37:12.062: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548210942s
Dec 22 11:37:14.848: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.335088386s
Dec 22 11:37:16.870: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.35654481s
Dec 22 11:37:18.887: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.373537201s
STEP: Saw pod success
Dec 22 11:37:18.887: INFO: Pod "pod-secrets-62338041-24af-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 11:37:18.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-62338041-24af-11ea-b023-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 22 11:37:19.305: INFO: Waiting for pod pod-secrets-62338041-24af-11ea-b023-0242ac110005 to disappear
Dec 22 11:37:19.518: INFO: Pod pod-secrets-62338041-24af-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:37:19.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8xp5s" for this suite.
Dec 22 11:37:26.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:37:26.469: INFO: namespace: e2e-tests-secrets-8xp5s, resource: bindings, ignored listing per whitelist Dec 22 11:37:26.541: INFO: namespace e2e-tests-secrets-8xp5s deletion completed in 6.98773673s • [SLOW TEST:19.313 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:37:26.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6546z [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector 
baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6546z STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6546z Dec 22 11:37:26.811: INFO: Found 0 stateful pods, waiting for 1 Dec 22 11:37:36.889: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 22 11:37:36.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 22 11:37:37.393: INFO: stderr: "" Dec 22 11:37:37.393: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 22 11:37:37.393: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 22 11:37:37.414: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 22 11:37:47.476: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 22 11:37:47.476: INFO: Waiting for statefulset status.replicas updated to 0 Dec 22 11:37:47.538: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999684s Dec 22 11:37:48.583: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.96864524s Dec 22 11:37:49.609: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.922834478s Dec 22 11:37:50.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.897160476s Dec 22 11:37:51.634: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.88638428s Dec 22 11:37:52.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.871879174s Dec 22 11:37:53.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.747825721s Dec 22 11:37:54.798: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 2.724144052s Dec 22 11:37:55.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.708187806s Dec 22 11:37:56.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 691.341107ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6546z Dec 22 11:37:57.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:37:58.703: INFO: stderr: "" Dec 22 11:37:58.704: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 22 11:37:58.704: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 22 11:37:58.742: INFO: Found 1 stateful pods, waiting for 3 Dec 22 11:38:08.780: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:38:08.780: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:38:08.780: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 22 11:38:18.774: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:38:18.774: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:38:18.774: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 22 11:38:28.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:38:28.754: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 22 11:38:28.754: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down 
will halt with unhealthy stateful pod Dec 22 11:38:28.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 22 11:38:29.276: INFO: stderr: "" Dec 22 11:38:29.276: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 22 11:38:29.276: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 22 11:38:29.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 22 11:38:29.830: INFO: stderr: "" Dec 22 11:38:29.831: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 22 11:38:29.831: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 22 11:38:29.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 22 11:38:30.327: INFO: stderr: "" Dec 22 11:38:30.328: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 22 11:38:30.328: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 22 11:38:30.328: INFO: Waiting for statefulset status.replicas updated to 0 Dec 22 11:38:30.355: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 22 11:38:40.379: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 22 11:38:40.379: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 22 11:38:40.379: INFO: Waiting for pod ss-2 to enter Running - 
Ready=false, currently Running - Ready=false Dec 22 11:38:40.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999168s Dec 22 11:38:41.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97722709s Dec 22 11:38:42.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968553896s Dec 22 11:38:43.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.927175456s Dec 22 11:38:44.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.907827024s Dec 22 11:38:45.696: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.864704243s Dec 22 11:38:46.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.79944909s Dec 22 11:38:47.805: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.770606884s Dec 22 11:38:48.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.690510497s Dec 22 11:38:50.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 531.94378ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-6546z Dec 22 11:38:51.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:38:51.776: INFO: stderr: "" Dec 22 11:38:51.776: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 22 11:38:51.776: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 22 11:38:51.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:38:52.348: INFO: stderr: "" Dec 22 11:38:52.349: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 22
11:38:52.349: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 22 11:38:52.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:38:52.959: INFO: rc: 126 Dec 22 11:38:52.960: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown command terminated with exit code 126 [] 0xc001905cb0 exit status 126 true [0xc000e6a758 0xc000e6a770 0xc000e6a788] [0xc000e6a758 0xc000e6a770 0xc000e6a788] [0xc000e6a768 0xc000e6a780] [0x935700 0x935700] 0xc00140f320 }: Command stdout: cannot exec in a stopped state: unknown stderr: command terminated with exit code 126 error: exit status 126 Dec 22 11:39:02.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:39:03.103: INFO: rc: 1 Dec 22 11:39:03.103: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002409fb0 exit status 1 true [0xc001d88210 0xc001d88228 0xc001d88240] [0xc001d88210 0xc001d88228 0xc001d88240] [0xc001d88220 0xc001d88238] [0x935700 0x935700] 0xc0009a5860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:39:13.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:39:13.894: INFO: rc: 1 Dec 22 11:39:13.895: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145bc20 exit status 1 true [0xc000f047b0 0xc000f047c8 0xc000f047e0] [0xc000f047b0 0xc000f047c8 0xc000f047e0] [0xc000f047c0 0xc000f047d8] [0x935700 0x935700] 0xc0018e38c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:39:23.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:39:24.134: INFO: rc: 1 Dec 22 11:39:24.135: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000fb8cf0 exit status 1 true [0xc001dda7b8 0xc001dda7d0 0xc001dda7e8] [0xc001dda7b8 0xc001dda7d0 0xc001dda7e8] [0xc001dda7c8 0xc001dda7e0] [0x935700 0x935700] 0xc0010111a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:39:34.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:39:34.257: INFO: rc: 1 Dec 22 11:39:34.258: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024081e0 exit status 1 true [0xc0000e80e8 0xc0000e82c0 0xc000420090] [0xc0000e80e8 0xc0000e82c0 0xc000420090] [0xc0000e8240 0xc000420080] [0x935700 0x935700] 0xc0010103c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:39:44.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:39:45.135: INFO: rc: 1 Dec 22 11:39:45.136: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b10180 exit status 1 true [0xc001d88000 0xc001d88020 0xc001d88038] [0xc001d88000 0xc001d88020 0xc001d88038] [0xc001d88018 0xc001d88030] [0x935700 0x935700] 0xc0017464e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:39:55.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:39:55.279: INFO: rc: 1 Dec 22 11:39:55.279: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014a6120 exit status 1 true [0xc001dda008 0xc001dda060 0xc001dda0c0] [0xc001dda008 0xc001dda060 0xc001dda0c0] [0xc001dda048 0xc001dda088] 
[0x935700 0x935700] 0xc0019029c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:40:05.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:40:05.438: INFO: rc: 1 Dec 22 11:40:05.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002408330 exit status 1 true [0xc0004200c0 0xc000420110 0xc0004201f0] [0xc0004200c0 0xc000420110 0xc0004201f0] [0xc0004200f0 0xc0004201c8] [0x935700 0x935700] 0xc001010780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:40:15.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:40:15.581: INFO: rc: 1 Dec 22 11:40:15.581: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa120 exit status 1 true [0xc001828000 0xc001828018 0xc001828030] [0xc001828000 0xc001828018 0xc001828030] [0xc001828010 0xc001828028] [0x935700 0x935700] 0xc001aa6a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:40:25.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Dec 22 11:40:25.729: INFO: rc: 1 Dec 22 11:40:25.730: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa270 exit status 1 true [0xc001828038 0xc001828050 0xc001828068] [0xc001828038 0xc001828050 0xc001828068] [0xc001828048 0xc001828060] [0x935700 0x935700] 0xc001358720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:40:35.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:40:35.897: INFO: rc: 1 Dec 22 11:40:35.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa390 exit status 1 true [0xc001828070 0xc001828088 0xc0018280a0] [0xc001828070 0xc001828088 0xc0018280a0] [0xc001828080 0xc001828098] [0x935700 0x935700] 0xc0013594a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:40:45.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:40:46.022: INFO: rc: 1 Dec 22 11:40:46.022: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa4b0 exit status 1 true [0xc0018280a8 0xc0018280c0 0xc0018280d8] [0xc0018280a8 0xc0018280c0 0xc0018280d8] [0xc0018280b8 0xc0018280d0] [0x935700 0x935700] 0xc001ece960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:40:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:40:56.295: INFO: rc: 1 Dec 22 11:40:56.296: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014a6270 exit status 1 true [0xc001dda0d0 0xc001dda130 0xc001dda178] [0xc001dda0d0 0xc001dda130 0xc001dda178] [0xc001dda118 0xc001dda168] [0x935700 0x935700] 0xc001903800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:41:06.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:41:06.426: INFO: rc: 1 Dec 22 11:41:06.426: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014a63c0 exit status 1 true [0xc001dda1a8 0xc001dda1f0 0xc001dda238] [0xc001dda1a8 0xc001dda1f0 0xc001dda238] [0xc001dda1e0 0xc001dda218] [0x935700 0x935700] 0xc0018bd140 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:41:16.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:41:16.651: INFO: rc: 1 Dec 22 11:41:16.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0024084b0 exit status 1 true [0xc0004201f8 0xc000420298 0xc000420318] [0xc0004201f8 0xc000420298 0xc000420318] [0xc000420288 0xc0004202e0] [0x935700 0x935700] 0xc001010ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:41:26.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:41:26.781: INFO: rc: 1 Dec 22 11:41:26.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b106c0 exit status 1 true [0xc001d88040 0xc001d88058 0xc001d88070] [0xc001d88040 0xc001d88058 0xc001d88070] [0xc001d88050 0xc001d88068] [0x935700 0x935700] 0xc001746e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:41:36.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:41:36.926: INFO: rc: 1 Dec 22 11:41:36.926: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002408210 exit status 1 true [0xc0000e8210 0xc00000e010 0xc001dda048] [0xc0000e8210 0xc00000e010 0xc001dda048] [0xc0000e82c0 0xc001dda038] [0x935700 0x935700] 0xc0013591a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:41:46.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:41:47.043: INFO: rc: 1 Dec 22 11:41:47.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014a6150 exit status 1 true [0xc001828000 0xc001828018 0xc001828030] [0xc001828000 0xc001828018 0xc001828030] [0xc001828010 0xc001828028] [0x935700 0x935700] 0xc001aa6a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:41:57.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:41:57.155: INFO: rc: 1 Dec 22 11:41:57.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b101b0 exit 
status 1 true [0xc000420000 0xc0004200c0 0xc000420110] [0xc000420000 0xc0004200c0 0xc000420110] [0xc000420090 0xc0004200f0] [0x935700 0x935700] 0xc0019029c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:42:07.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:42:07.286: INFO: rc: 1 Dec 22 11:42:07.287: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa150 exit status 1 true [0xc001d88000 0xc001d88020 0xc001d88038] [0xc001d88000 0xc001d88020 0xc001d88038] [0xc001d88018 0xc001d88030] [0x935700 0x935700] 0xc0010103c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:42:17.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:42:17.477: INFO: rc: 1 Dec 22 11:42:17.478: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa2d0 exit status 1 true [0xc001d88040 0xc001d88058 0xc001d88070] [0xc001d88040 0xc001d88058 0xc001d88070] [0xc001d88050 0xc001d88068] [0x935700 0x935700] 0xc001010780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 22 11:42:27.479: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:42:27.672: INFO: rc: 1 Dec 22 11:42:27.672: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008fa450 exit status 1 true [0xc001d88078 0xc001d88090 0xc001d880a8] [0xc001d88078 0xc001d88090 0xc001d880a8] [0xc001d88088 0xc001d880a0] [0x935700 0x935700] 0xc001010ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 [identical RunHostCmd retries, each ending in rc: 1 with Error from server (NotFound): pods "ss-2" not found, repeated every 10s from Dec 22 11:42:37 through Dec 22 11:43:48] Dec 22 11:43:58.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6546z ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 22 11:43:59.106: INFO: rc: 1 Dec 22 11:43:59.107: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Dec 22 11:43:59.107: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse
order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 22 11:43:59.131: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6546z Dec 22 11:43:59.135: INFO: Scaling statefulset ss to 0 Dec 22 11:43:59.146: INFO: Waiting for statefulset status.replicas updated to 0 Dec 22 11:43:59.148: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:43:59.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6546z" for this suite. Dec 22 11:44:07.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:44:07.272: INFO: namespace: e2e-tests-statefulset-6546z, resource: bindings, ignored listing per whitelist Dec 22 11:44:07.415: INFO: namespace e2e-tests-statefulset-6546z deletion completed in 8.237886304s • [SLOW TEST:400.873 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client Dec 22 11:44:07.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-6v7n STEP: Creating a pod to test atomic-volume-subpath Dec 22 11:44:07.675: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6v7n" in namespace "e2e-tests-subpath-5vc7z" to be "success or failure" Dec 22 11:44:07.691: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 15.769451ms Dec 22 11:44:09.719: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04339467s Dec 22 11:44:11.734: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05867259s Dec 22 11:44:13.752: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077136149s Dec 22 11:44:15.768: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093033622s Dec 22 11:44:17.782: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106685314s Dec 22 11:44:19.805: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.130035487s Dec 22 11:44:22.202: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.52635207s Dec 22 11:44:24.236: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.560659056s Dec 22 11:44:26.249: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 18.57383158s Dec 22 11:44:28.269: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 20.594145437s Dec 22 11:44:30.285: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 22.609873391s Dec 22 11:44:32.298: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 24.622555394s Dec 22 11:44:34.310: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 26.634581376s Dec 22 11:44:36.331: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 28.65632582s Dec 22 11:44:38.351: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 30.67537246s Dec 22 11:44:40.370: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Running", Reason="", readiness=false. Elapsed: 32.694991175s Dec 22 11:44:42.467: INFO: Pod "pod-subpath-test-configmap-6v7n": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 34.792064768s STEP: Saw pod success Dec 22 11:44:42.468: INFO: Pod "pod-subpath-test-configmap-6v7n" satisfied condition "success or failure" Dec 22 11:44:42.495: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-6v7n container test-container-subpath-configmap-6v7n: STEP: delete the pod Dec 22 11:44:42.655: INFO: Waiting for pod pod-subpath-test-configmap-6v7n to disappear Dec 22 11:44:42.856: INFO: Pod pod-subpath-test-configmap-6v7n no longer exists STEP: Deleting pod pod-subpath-test-configmap-6v7n Dec 22 11:44:42.857: INFO: Deleting pod "pod-subpath-test-configmap-6v7n" in namespace "e2e-tests-subpath-5vc7z" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:44:42.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5vc7z" for this suite. Dec 22 11:44:48.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:44:49.047: INFO: namespace: e2e-tests-subpath-5vc7z, resource: bindings, ignored listing per whitelist Dec 22 11:44:49.117: INFO: namespace e2e-tests-subpath-5vc7z deletion completed in 6.2185742s • [SLOW TEST:41.702 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:44:49.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Dec 22 11:44:49.843: INFO: Waiting up to 5m0s for pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d" in namespace "e2e-tests-svcaccounts-pvfl4" to be "success or failure" Dec 22 11:44:49.993: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 149.251649ms Dec 22 11:44:52.504: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660342777s Dec 22 11:44:54.541: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.698019688s Dec 22 11:44:57.068: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.224601209s Dec 22 11:44:59.085: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.241208524s Dec 22 11:45:01.529: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.685429224s Dec 22 11:45:03.547: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.703923812s Dec 22 11:45:06.146: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.302933573s Dec 22 11:45:08.162: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.318459473s Dec 22 11:45:10.186: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.342311543s STEP: Saw pod success Dec 22 11:45:10.186: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d" satisfied condition "success or failure" Dec 22 11:45:10.191: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d container token-test: STEP: delete the pod Dec 22 11:45:10.625: INFO: Waiting for pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d to disappear Dec 22 11:45:10.643: INFO: Pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-6tc7d no longer exists STEP: Creating a pod to test consume service account root CA Dec 22 11:45:10.668: INFO: Waiting up to 5m0s for pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6" in namespace "e2e-tests-svcaccounts-pvfl4" to be "success or failure" Dec 22 11:45:10.679: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.013195ms Dec 22 11:45:13.312: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.644391349s Dec 22 11:45:15.338: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670440051s Dec 22 11:45:18.286: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.618238039s Dec 22 11:45:20.308: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.639646364s Dec 22 11:45:22.808: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.139663099s Dec 22 11:45:25.668: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.000033124s Dec 22 11:45:27.804: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.135965521s Dec 22 11:45:29.850: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.182366422s Dec 22 11:45:31.906: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.238190266s STEP: Saw pod success Dec 22 11:45:31.906: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6" satisfied condition "success or failure" Dec 22 11:45:31.920: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6 container root-ca-test: STEP: delete the pod Dec 22 11:45:32.535: INFO: Waiting for pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6 to disappear Dec 22 11:45:32.591: INFO: Pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-lqvm6 no longer exists STEP: Creating a pod to test consume service account namespace Dec 22 11:45:32.700: INFO: Waiting up to 5m0s for pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt" in namespace "e2e-tests-svcaccounts-pvfl4" to be "success or failure" Dec 22 11:45:32.709: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.565208ms Dec 22 11:45:35.205: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504665399s Dec 22 11:45:37.217: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.516078998s Dec 22 11:45:39.863: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 7.162004369s Dec 22 11:45:42.272: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.570947697s Dec 22 11:45:44.289: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.588828544s Dec 22 11:45:46.304: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.603715881s Dec 22 11:45:48.316: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.615423977s Dec 22 11:45:50.344: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.643411552s STEP: Saw pod success Dec 22 11:45:50.344: INFO: Pod "pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt" satisfied condition "success or failure" Dec 22 11:45:50.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt container namespace-test: STEP: delete the pod Dec 22 11:45:51.357: INFO: Waiting for pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt to disappear Dec 22 11:45:51.660: INFO: Pod pod-service-account-75d6ff8c-24b0-11ea-b023-0242ac110005-mgftt no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:45:51.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-pvfl4" for this suite. Dec 22 11:45:59.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:46:00.019: INFO: namespace: e2e-tests-svcaccounts-pvfl4, resource: bindings, ignored listing per whitelist Dec 22 11:46:00.052: INFO: namespace e2e-tests-svcaccounts-pvfl4 deletion completed in 8.371031283s • [SLOW TEST:70.935 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:46:00.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-b2smg.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b2smg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b2smg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-b2smg.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b2smg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b2smg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 22 11:46:17.122: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.130: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.139: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.147: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.154: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.169: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods 
dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.175: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.181: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.192: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b2smg.svc.cluster.local from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.199: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.204: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.210: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005) Dec 22 11:46:17.210: INFO: Lookups using e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local 
jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b2smg.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 22 11:46:22.342: INFO: DNS probes using e2e-tests-dns-b2smg/dns-test-9fe92f7e-24b0-11ea-b023-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:46:22.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-b2smg" for this suite. Dec 22 11:46:30.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:46:30.699: INFO: namespace: e2e-tests-dns-b2smg, resource: bindings, ignored listing per whitelist Dec 22 11:46:30.735: INFO: namespace e2e-tests-dns-b2smg deletion completed in 8.262182388s • [SLOW TEST:30.683 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:46:30.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API 
[Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-b2173e98-24b0-11ea-b023-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-b2173e5a-24b0-11ea-b023-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 22 11:46:30.955: INFO: Waiting up to 5m0s for pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-5ffvl" to be "success or failure" Dec 22 11:46:31.058: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.417486ms Dec 22 11:46:33.072: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11657476s Dec 22 11:46:36.541: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.585036791s Dec 22 11:46:38.567: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.611421974s Dec 22 11:46:40.609: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.653767233s Dec 22 11:46:42.654: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.698697496s STEP: Saw pod success Dec 22 11:46:42.655: INFO: Pod "projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:46:42.668: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005 container projected-all-volume-test: STEP: delete the pod Dec 22 11:46:42.778: INFO: Waiting for pod projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005 to disappear Dec 22 11:46:42.785: INFO: Pod projected-volume-b2173ca9-24b0-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:46:42.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5ffvl" for this suite. Dec 22 11:46:48.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:46:49.005: INFO: namespace: e2e-tests-projected-5ffvl, resource: bindings, ignored listing per whitelist Dec 22 11:46:49.106: INFO: namespace e2e-tests-projected-5ffvl deletion completed in 6.313708181s • [SLOW TEST:18.370 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:46:49.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:46:49.310: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:46:50.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-f8sw2" for this suite. Dec 22 11:46:56.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:46:56.970: INFO: namespace: e2e-tests-custom-resource-definition-f8sw2, resource: bindings, ignored listing per whitelist Dec 22 11:46:57.003: INFO: namespace e2e-tests-custom-resource-definition-f8sw2 deletion completed in 6.48858825s • [SLOW TEST:7.897 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:46:57.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-c1d0f4a9-24b0-11ea-b023-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 22 11:46:57.341: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-grq4q" to be "success or failure" Dec 22 11:46:57.353: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.801198ms Dec 22 11:46:59.695: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353895557s Dec 22 11:47:01.720: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378228805s Dec 22 11:47:03.777: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435711054s Dec 22 11:47:05.788: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446853339s Dec 22 11:47:07.814: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.472658459s Dec 22 11:47:10.282: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.94051155s STEP: Saw pod success Dec 22 11:47:10.282: INFO: Pod "pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:47:10.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 22 11:47:10.964: INFO: Waiting for pod pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005 to disappear Dec 22 11:47:11.202: INFO: Pod pod-projected-configmaps-c1d4aad3-24b0-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:47:11.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-grq4q" for this suite. 
Dec 22 11:47:17.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:47:17.667: INFO: namespace: e2e-tests-projected-grq4q, resource: bindings, ignored listing per whitelist Dec 22 11:47:17.677: INFO: namespace e2e-tests-projected-grq4q deletion completed in 6.451316768s • [SLOW TEST:20.673 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:47:17.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-q4xp STEP: Creating a pod to test atomic-volume-subpath Dec 22 11:47:18.240: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q4xp" in namespace "e2e-tests-subpath-22dbh" to be "success or failure" Dec 22 11:47:18.280: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.598189ms Dec 22 11:47:20.542: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3019763s Dec 22 11:47:22.574: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333730015s Dec 22 11:47:24.900: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.66027486s Dec 22 11:47:27.004: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.763557341s Dec 22 11:47:29.140: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900029399s Dec 22 11:47:31.215: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.97460999s Dec 22 11:47:33.311: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.070974322s Dec 22 11:47:35.335: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Pending", Reason="", readiness=false. Elapsed: 17.095471486s Dec 22 11:47:37.351: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 19.111495563s Dec 22 11:47:39.385: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 21.145039153s Dec 22 11:47:41.408: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 23.168038175s Dec 22 11:47:43.433: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 25.192973718s Dec 22 11:47:45.472: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 27.231603255s Dec 22 11:47:47.498: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 29.258382159s Dec 22 11:47:49.513: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. 
Elapsed: 31.273385236s Dec 22 11:47:51.531: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 33.291187077s Dec 22 11:47:53.551: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 35.311147842s Dec 22 11:47:55.570: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Running", Reason="", readiness=false. Elapsed: 37.330400928s Dec 22 11:47:57.843: INFO: Pod "pod-subpath-test-projected-q4xp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.603107368s STEP: Saw pod success Dec 22 11:47:57.843: INFO: Pod "pod-subpath-test-projected-q4xp" satisfied condition "success or failure" Dec 22 11:47:58.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-q4xp container test-container-subpath-projected-q4xp: STEP: delete the pod Dec 22 11:47:58.729: INFO: Waiting for pod pod-subpath-test-projected-q4xp to disappear Dec 22 11:47:58.794: INFO: Pod pod-subpath-test-projected-q4xp no longer exists STEP: Deleting pod pod-subpath-test-projected-q4xp Dec 22 11:47:58.794: INFO: Deleting pod "pod-subpath-test-projected-q4xp" in namespace "e2e-tests-subpath-22dbh" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:47:58.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-22dbh" for this suite. 
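The pod-subpath-test-projected-q4xp pod above exercises `subPath` mounts backed by a projected (atomic-writer) volume. A hedged sketch of the kind of manifest involved — the names, image, paths, and the configMap source here are illustrative, not the test's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                            # illustrative image
    command: ["sh", "-c", "cat /test-volume/file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume
      subPath: sub/path                       # mount only this sub-directory of the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-configmap             # illustrative projected source
```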
Dec 22 11:48:06.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:48:06.891: INFO: namespace: e2e-tests-subpath-22dbh, resource: bindings, ignored listing per whitelist Dec 22 11:48:06.964: INFO: namespace e2e-tests-subpath-22dbh deletion completed in 8.156834738s • [SLOW TEST:49.287 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:48:06.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:48:07.165: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Dec 22 11:48:07.256: INFO: Number of nodes with available pods: 0 Dec 22 11:48:07.256: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:09.209: INFO: Number of nodes with available pods: 0 Dec 22 11:48:09.210: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:09.711: INFO: Number of nodes with available pods: 0 Dec 22 11:48:09.711: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:10.414: INFO: Number of nodes with available pods: 0 Dec 22 11:48:10.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:11.303: INFO: Number of nodes with available pods: 0 Dec 22 11:48:11.303: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:12.280: INFO: Number of nodes with available pods: 0 Dec 22 11:48:12.280: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:13.283: INFO: Number of nodes with available pods: 0 Dec 22 11:48:13.283: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:14.466: INFO: Number of nodes with available pods: 0 Dec 22 11:48:14.467: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:15.308: INFO: Number of nodes with available pods: 0 Dec 22 11:48:15.309: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:16.383: INFO: Number of nodes with available pods: 0 Dec 22 11:48:16.383: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:17.285: INFO: Number of nodes with available pods: 0 Dec 22 11:48:17.285: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:18.309: INFO: Number of nodes with available pods: 1 Dec 22 11:48:18.310: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. 
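The "Update daemon pods image" step relies on the DaemonSet's `updateStrategy` being `RollingUpdate`, so that changing the pod template image replaces running pods in place. A hedged sketch of the relevant spec fields (labels and container name are illustrative; the images match the ones the log reports):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                 # illustrative label
  updateStrategy:
    type: RollingUpdate               # replace pods when the template changes
    rollingUpdate:
      maxUnavailable: 1               # at most one node's pod down at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                     # illustrative container name
        image: docker.io/library/nginx:1.14-alpine
        # the test then updates this to gcr.io/kubernetes-e2e-test-images/redis:1.0,
        # producing the "Wrong image for pod" messages below until rollout completes
```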
STEP: Check that daemon pods images are updated. Dec 22 11:48:18.531: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:19.657: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:20.589: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:21.943: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:22.615: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:23.938: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:24.612: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:25.584: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:25.585: INFO: Pod daemon-set-dz95d is not available Dec 22 11:48:26.620: INFO: Wrong image for pod: daemon-set-dz95d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Dec 22 11:48:26.620: INFO: Pod daemon-set-dz95d is not available Dec 22 11:48:27.584: INFO: Pod daemon-set-grf28 is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Dec 22 11:48:27.633: INFO: Number of nodes with available pods: 0 Dec 22 11:48:27.633: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:28.660: INFO: Number of nodes with available pods: 0 Dec 22 11:48:28.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:29.763: INFO: Number of nodes with available pods: 0 Dec 22 11:48:29.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:30.682: INFO: Number of nodes with available pods: 0 Dec 22 11:48:30.683: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:31.664: INFO: Number of nodes with available pods: 0 Dec 22 11:48:31.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:33.379: INFO: Number of nodes with available pods: 0 Dec 22 11:48:33.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:33.911: INFO: Number of nodes with available pods: 0 Dec 22 11:48:33.911: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:34.884: INFO: Number of nodes with available pods: 0 Dec 22 11:48:34.884: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:35.659: INFO: Number of nodes with available pods: 0 Dec 22 11:48:35.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 22 11:48:36.661: INFO: Number of nodes with available pods: 1 Dec 22 11:48:36.661: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gd859, will wait for the garbage collector to delete the pods Dec 22 11:48:36.803: INFO: Deleting DaemonSet.extensions daemon-set took: 
59.632873ms Dec 22 11:48:36.904: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.769499ms Dec 22 11:48:43.618: INFO: Number of nodes with available pods: 0 Dec 22 11:48:43.618: INFO: Number of running nodes: 0, number of available pods: 0 Dec 22 11:48:43.635: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gd859/daemonsets","resourceVersion":"15674043"},"items":null} Dec 22 11:48:43.646: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gd859/pods","resourceVersion":"15674043"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:48:43.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gd859" for this suite. Dec 22 11:48:49.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:48:49.922: INFO: namespace: e2e-tests-daemonsets-gd859, resource: bindings, ignored listing per whitelist Dec 22 11:48:50.070: INFO: namespace e2e-tests-daemonsets-gd859 deletion completed in 6.329716464s • [SLOW TEST:43.106 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:48:50.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:48:50.296: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 22 11:48:50.396: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 22 11:48:55.410: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 22 11:49:01.424: INFO: Creating deployment "test-rolling-update-deployment" Dec 22 11:49:01.448: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 22 11:49:01.465: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 22 11:49:03.574: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 22 11:49:03.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 11:49:05.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 11:49:07.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 11:49:09.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712612141, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 22 11:49:12.365: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 22 11:49:12.687: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sbsr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sbsr9/deployments/test-rolling-update-deployment,UID:0bd1f333-24b1-11ea-a994-fa163e34d433,ResourceVersion:15674138,Generation:1,CreationTimestamp:2019-12-22 11:49:01 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-22 11:49:01 +0000 UTC 2019-12-22 11:49:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-22 11:49:11 +0000 UTC 2019-12-22 11:49:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 22 11:49:12.699: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sbsr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sbsr9/replicasets/test-rolling-update-deployment-75db98fb4c,UID:0bdab69b-24b1-11ea-a994-fa163e34d433,ResourceVersion:15674129,Generation:1,CreationTimestamp:2019-12-22 11:49:01 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0bd1f333-24b1-11ea-a994-fa163e34d433 0xc0021cfad7 0xc0021cfad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 22 11:49:12.699: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 22 11:49:12.700: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sbsr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sbsr9/replicasets/test-rolling-update-controller,UID:052f4dab-24b1-11ea-a994-fa163e34d433,ResourceVersion:15674137,Generation:2,CreationTimestamp:2019-12-22 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0bd1f333-24b1-11ea-a994-fa163e34d433 0xc0021cf9ff 0xc0021cfa10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 22 11:49:12.715: INFO: Pod "test-rolling-update-deployment-75db98fb4c-n9p62" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-n9p62,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sbsr9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sbsr9/pods/test-rolling-update-deployment-75db98fb4c-n9p62,UID:0bedca9d-24b1-11ea-a994-fa163e34d433,ResourceVersion:15674128,Generation:0,CreationTimestamp:2019-12-22 11:49:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 0bdab69b-24b1-11ea-a994-fa163e34d433 0xc0026523b7 0xc0026523b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-24zfn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-24zfn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-24zfn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002652420} {node.kubernetes.io/unreachable Exists NoExecute 0xc002652440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:49:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:49:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:49:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 11:49:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-22 11:49:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-22 11:49:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c22e4a23d903d8a2b7f539c3466453e1ebff14d1b510ca1d72ad706425cef6e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:49:12.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-sbsr9" for this suite. Dec 22 11:49:21.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:49:22.104: INFO: namespace: e2e-tests-deployment-sbsr9, resource: bindings, ignored listing per whitelist Dec 22 11:49:22.305: INFO: namespace e2e-tests-deployment-sbsr9 deletion completed in 9.571483678s • [SLOW TEST:32.235 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:49:22.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-18644428-24b1-11ea-b023-0242ac110005 STEP: Creating a pod to test consume secrets Dec 22 11:49:22.563: INFO: Waiting up to 5m0s for pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-7sk7n" to be "success or failure" Dec 22 11:49:22.726: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 162.820393ms Dec 22 11:49:24.836: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272214534s Dec 22 11:49:26.872: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308261705s Dec 22 11:49:28.935: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371087557s Dec 22 11:49:30.950: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386506958s Dec 22 11:49:32.969: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.405312985s Dec 22 11:49:34.998: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.4344588s STEP: Saw pod success Dec 22 11:49:34.998: INFO: Pod "pod-secrets-18666040-24b1-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:49:35.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-18666040-24b1-11ea-b023-0242ac110005 container secret-volume-test: STEP: delete the pod Dec 22 11:49:35.102: INFO: Waiting for pod pod-secrets-18666040-24b1-11ea-b023-0242ac110005 to disappear Dec 22 11:49:35.122: INFO: Pod pod-secrets-18666040-24b1-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:49:35.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7sk7n" for this suite. 
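The secret-volume test above consumes a Secret through a volume with explicit key-to-path mappings. A minimal sketch of the kind of objects it exercises — all names, keys, and the image here are illustrative, not the generated identifiers from this run:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map            # hypothetical; the e2e framework generates a unique name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # assumption; the suite uses its own test images
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                        # the "mappings": secret key -> custom file path
      - key: data-1
        path: new-path-data-1
```

The pod prints the mapped value and exits, which is what lets the framework wait for the "success or failure" condition seen in the log.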
Dec 22 11:49:41.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:49:41.342: INFO: namespace: e2e-tests-secrets-7sk7n, resource: bindings, ignored listing per whitelist Dec 22 11:49:41.585: INFO: namespace e2e-tests-secrets-7sk7n deletion completed in 6.349869573s • [SLOW TEST:19.279 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:49:41.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-23eeabde-24b1-11ea-b023-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 22 11:49:42.059: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-zcqrz" to be "success or failure" Dec 22 11:49:42.092: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.895692ms Dec 22 11:49:44.110: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051288295s Dec 22 11:49:46.139: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079472061s Dec 22 11:49:48.467: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.407649351s Dec 22 11:49:50.485: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.425721196s Dec 22 11:49:52.515: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.455633734s STEP: Saw pod success Dec 22 11:49:52.515: INFO: Pod "pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:49:52.550: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Dec 22 11:49:52.922: INFO: Waiting for pod pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005 to disappear Dec 22 11:49:52.942: INFO: Pod pod-projected-configmaps-23f095d2-24b1-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:49:52.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zcqrz" for this suite. 
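The projected-configMap test above mounts a ConfigMap through a `projected` volume source rather than a plain `configMap` volume. A hedged sketch of an equivalent pod spec (names and image are assumptions, not the generated ones from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                            # assumption; the suite uses its own test images
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:                                # projected volumes can merge several sources
      - configMap:
          name: projected-configmap-test-volume   # hypothetical ConfigMap name
```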
Dec 22 11:49:59.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:49:59.165: INFO: namespace: e2e-tests-projected-zcqrz, resource: bindings, ignored listing per whitelist Dec 22 11:49:59.317: INFO: namespace e2e-tests-projected-zcqrz deletion completed in 6.363570852s • [SLOW TEST:17.732 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:49:59.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-2e7244ff-24b1-11ea-b023-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:50:13.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-d4pdw" for this suite. 
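The "binary data should be reflected in volume" test above relies on the ConfigMap `binaryData` field, which holds base64-encoded bytes alongside the plain-text `data` field. A minimal sketch (name and payload are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd        # hypothetical; the run uses a generated name
data:
  data-1: value-1                 # text data, served as-is in the volume
binaryData:
  dump.bin: 3q2+7w==              # base64 of the raw bytes 0xde 0xad 0xbe 0xef
```

Both keys appear as files in a `configMap` volume, matching the log's two wait steps ("pod with text data", "pod with binary data").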
Dec 22 11:50:37.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:50:37.971: INFO: namespace: e2e-tests-configmap-d4pdw, resource: bindings, ignored listing per whitelist Dec 22 11:50:38.012: INFO: namespace e2e-tests-configmap-d4pdw deletion completed in 24.205123789s • [SLOW TEST:38.695 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:50:38.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-45862cf2-24b1-11ea-b023-0242ac110005 STEP: Creating a pod to test consume configMaps Dec 22 11:50:38.272: INFO: Waiting up to 5m0s for pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-trw4b" to be "success or failure" Dec 22 11:50:38.279: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.635049ms Dec 22 11:50:40.291: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01850781s Dec 22 11:50:42.317: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045124964s Dec 22 11:50:44.799: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526870586s Dec 22 11:50:46.819: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546393006s Dec 22 11:50:48.837: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.564689226s STEP: Saw pod success Dec 22 11:50:48.837: INFO: Pod "pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005" satisfied condition "success or failure" Dec 22 11:50:48.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005 container configmap-volume-test: STEP: delete the pod Dec 22 11:50:49.120: INFO: Waiting for pod pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005 to disappear Dec 22 11:50:49.131: INFO: Pod pod-configmaps-458763e4-24b1-11ea-b023-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:50:49.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-trw4b" for this suite. 
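The defaultMode test above sets the file permission bits on ConfigMap volume files. Note that the API serializes the mode in decimal, which is why the earlier pod dump shows `DefaultMode:*420` — that is octal 0644. A sketch under assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumption; the suite uses its own test images
    command: ["ls", "-l", "/etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume    # hypothetical ConfigMap name
      defaultMode: 0400              # octal in YAML; dumps show it as decimal (0644 -> 420)
```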
Dec 22 11:50:55.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:50:55.553: INFO: namespace: e2e-tests-configmap-trw4b, resource: bindings, ignored listing per whitelist Dec 22 11:50:55.613: INFO: namespace e2e-tests-configmap-trw4b deletion completed in 6.469452915s • [SLOW TEST:17.600 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:50:55.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:50:55.839: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:51:08.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-pods-xw9sl" for this suite. Dec 22 11:51:54.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:51:54.676: INFO: namespace: e2e-tests-pods-xw9sl, resource: bindings, ignored listing per whitelist Dec 22 11:51:54.755: INFO: namespace e2e-tests-pods-xw9sl deletion completed in 46.447261496s • [SLOW TEST:59.141 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:51:54.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 22 11:51:55.045: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 22 11:52:00.063: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 22 11:52:06.406: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 22 11:52:06.486: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-dkm6n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dkm6n/deployments/test-cleanup-deployment,UID:7a15522c-24b1-11ea-a994-fa163e34d433,ResourceVersion:15674515,Generation:1,CreationTimestamp:2019-12-22 11:52:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Dec 22 11:52:06.492: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:52:06.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-dkm6n" for this suite. 
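The Deployment dump above shows `RevisionHistoryLimit:*0`, which is the setting this cleanup test exercises: with zero retained revisions, superseded ReplicaSets are deleted after a rollout. A sketch of an equivalent manifest, using the names and image that appear in the dump (the rest of the spec is a minimal assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  revisionHistoryLimit: 0        # keep no old ReplicaSets once a rollout completes
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The default for `revisionHistoryLimit` is 10; setting it to 0 is what lets the test wait for the old ReplicaSet's history to be cleaned up.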
Dec 22 11:52:14.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:52:14.955: INFO: namespace: e2e-tests-deployment-dkm6n, resource: bindings, ignored listing per whitelist Dec 22 11:52:15.071: INFO: namespace e2e-tests-deployment-dkm6n deletion completed in 8.419217658s • [SLOW TEST:20.315 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:52:15.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:53:25.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-xx2m4" for this suite. Dec 22 11:53:31.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 22 11:53:31.636: INFO: namespace: e2e-tests-container-runtime-xx2m4, resource: bindings, ignored listing per whitelist Dec 22 11:53:31.649: INFO: namespace e2e-tests-container-runtime-xx2m4 deletion completed in 6.195866297s • [SLOW TEST:76.577 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support 
--unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 22 11:53:31.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Dec 22 11:53:31.841: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix272085326/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 22 11:53:31.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nkf77" for this suite. 
Dec 22 11:53:38.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:53:38.088: INFO: namespace: e2e-tests-kubectl-nkf77, resource: bindings, ignored listing per whitelist
Dec 22 11:53:38.234: INFO: namespace e2e-tests-kubectl-nkf77 deletion completed in 6.222706044s
• [SLOW TEST:6.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:53:38.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 22 11:53:51.342: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b0f0743e-24b1-11ea-b023-0242ac110005"
Dec 22 11:53:51.342: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b0f0743e-24b1-11ea-b023-0242ac110005" in namespace "e2e-tests-pods-2r52z" to be "terminated due to deadline exceeded"
Dec 22 11:53:51.370: INFO: Pod "pod-update-activedeadlineseconds-b0f0743e-24b1-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 27.426178ms
Dec 22 11:53:53.382: INFO: Pod "pod-update-activedeadlineseconds-b0f0743e-24b1-11ea-b023-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.039916564s
Dec 22 11:53:53.382: INFO: Pod "pod-update-activedeadlineseconds-b0f0743e-24b1-11ea-b023-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:53:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2r52z" for this suite.
Dec 22 11:54:01.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:54:01.675: INFO: namespace: e2e-tests-pods-2r52z, resource: bindings, ignored listing per whitelist
Dec 22 11:54:01.762: INFO: namespace e2e-tests-pods-2r52z deletion completed in 8.373961539s
• [SLOW TEST:23.527 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:54:01.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1222 11:54:32.723874 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 11:54:32.724: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:54:32.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6rfcz" for this suite.
Dec 22 11:54:41.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:54:41.982: INFO: namespace: e2e-tests-gc-6rfcz, resource: bindings, ignored listing per whitelist
Dec 22 11:54:42.094: INFO: namespace e2e-tests-gc-6rfcz deletion completed in 9.348857872s
• [SLOW TEST:40.332 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:54:42.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 22 11:54:57.386: INFO: Successfully updated pod "pod-update-d728e501-24b1-11ea-b023-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 22 11:54:57.431: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:54:57.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ttkkc" for this suite.
Dec 22 11:55:23.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:55:23.737: INFO: namespace: e2e-tests-pods-ttkkc, resource: bindings, ignored listing per whitelist
Dec 22 11:55:24.186: INFO: namespace e2e-tests-pods-ttkkc deletion completed in 26.560474924s
• [SLOW TEST:42.091 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:55:24.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-lh6s2
Dec 22 11:55:36.517: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-lh6s2
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 11:55:36.524: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 11:59:37.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lh6s2" for this suite.
Dec 22 11:59:45.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 11:59:45.712: INFO: namespace: e2e-tests-container-probe-lh6s2, resource: bindings, ignored listing per whitelist
Dec 22 11:59:45.847: INFO: namespace e2e-tests-container-probe-lh6s2 deletion completed in 8.429163547s
• [SLOW TEST:261.660 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 11:59:45.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 22 11:59:56.595: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:00:23.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6n6p7" for this suite.
Dec 22 12:00:30.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:00:30.109: INFO: namespace: e2e-tests-namespaces-6n6p7, resource: bindings, ignored listing per whitelist
Dec 22 12:00:30.181: INFO: namespace e2e-tests-namespaces-6n6p7 deletion completed in 6.216317723s
STEP: Destroying namespace "e2e-tests-nsdeletetest-ms5ng" for this suite.
Dec 22 12:00:30.189: INFO: Namespace e2e-tests-nsdeletetest-ms5ng was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-qszgc" for this suite.
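The behavior this Namespaces test exercises, namely that deleting a namespace tears down every pod inside it, can be reproduced with a minimal manifest sketch; the `nsdelete-demo` and `demo-pod` names below are hypothetical, not objects from this run:

```yaml
# Hypothetical namespace plus a pod inside it. Deleting the
# namespace deletes the pod as part of namespace teardown.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdelete-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: nsdelete-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # any long-running image works
```

After `kubectl delete namespace nsdelete-demo` completes and the namespace is recreated, listing pods in it returns nothing, which is the condition the test asserts under "Verifying there are no pods in the namespace".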
Dec 22 12:00:36.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:00:36.305: INFO: namespace: e2e-tests-nsdeletetest-qszgc, resource: bindings, ignored listing per whitelist
Dec 22 12:00:36.445: INFO: namespace e2e-tests-nsdeletetest-qszgc deletion completed in 6.256364905s
• [SLOW TEST:50.598 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:00:36.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 22 12:00:47.545: INFO: Successfully updated pod "labelsupdateaa452d4d-24b2-11ea-b023-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:00:49.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kr6k6" for this suite.
Dec 22 12:01:29.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:01:29.936: INFO: namespace: e2e-tests-projected-kr6k6, resource: bindings, ignored listing per whitelist
Dec 22 12:01:29.972: INFO: namespace e2e-tests-projected-kr6k6 deletion completed in 40.237840256s
• [SLOW TEST:53.526 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:01:29.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:01:44.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-z8ff4" for this suite.
Dec 22 12:02:08.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:02:08.207: INFO: namespace: e2e-tests-replication-controller-z8ff4, resource: bindings, ignored listing per whitelist
Dec 22 12:02:08.312: INFO: namespace e2e-tests-replication-controller-z8ff4 deletion completed in 24.233459696s
• [SLOW TEST:38.340 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:02:08.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1222 12:02:52.262353 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 12:02:52.262: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:02:52.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5676h" for this suite.
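The orphaning semantics this Garbage collector test checks can be sketched outside the suite with a plain ReplicationController; the `orphan-demo` names below are hypothetical, not the test's actual spec:

```yaml
# Hypothetical RC; the pods it creates carry an ownerReference
# pointing back at it, which the garbage collector tracks.
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-demo
spec:
  replicas: 2
  selector:
    app: orphan-demo
  template:
    metadata:
      labels:
        app: orphan-demo
    spec:
      containers:
      - name: nginx
        image: nginx
```

Deleting it with orphan propagation (for example `kubectl delete rc orphan-demo --cascade=false` on clients of this vintage, or a DELETE request whose body sets `propagationPolicy: Orphan`) removes only the RC: the garbage collector strips the ownerReferences and leaves the pods running, which is what the 30-second wait above verifies.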
Dec 22 12:03:03.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:03:05.024: INFO: namespace: e2e-tests-gc-5676h, resource: bindings, ignored listing per whitelist
Dec 22 12:03:05.057: INFO: namespace e2e-tests-gc-5676h deletion completed in 12.778164665s
• [SLOW TEST:56.745 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:03:05.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 22 12:03:06.485: INFO: Waiting up to 5m0s for pod "pod-03085f72-24b3-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-2lf2b" to be "success or failure"
Dec 22 12:03:06.740: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 254.084048ms
Dec 22 12:03:08.779: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293052922s
Dec 22 12:03:10.807: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321884376s
Dec 22 12:03:13.120: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634407388s
Dec 22 12:03:15.135: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.649785813s
Dec 22 12:03:17.866: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.380903141s
Dec 22 12:03:19.899: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.413190478s
Dec 22 12:03:21.919: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.43300221s
Dec 22 12:03:24.423: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.937755265s
Dec 22 12:03:26.945: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.459206552s
Dec 22 12:03:28.952: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.466342011s
Dec 22 12:03:30.969: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.483726496s
STEP: Saw pod success
Dec 22 12:03:30.970: INFO: Pod "pod-03085f72-24b3-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:03:30.974: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-03085f72-24b3-11ea-b023-0242ac110005 container test-container:
STEP: delete the pod
Dec 22 12:03:31.114: INFO: Waiting for pod pod-03085f72-24b3-11ea-b023-0242ac110005 to disappear
Dec 22 12:03:31.199: INFO: Pod pod-03085f72-24b3-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:03:31.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2lf2b" for this suite.
Dec 22 12:03:37.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:03:37.416: INFO: namespace: e2e-tests-emptydir-2lf2b, resource: bindings, ignored listing per whitelist
Dec 22 12:03:37.470: INFO: namespace e2e-tests-emptydir-2lf2b deletion completed in 6.254149449s
• [SLOW TEST:32.413 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:03:37.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-161990f7-24b3-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:03:37.796: INFO: Waiting up to 5m0s for pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-x9czp" to be "success or failure"
Dec 22 12:03:37.852: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.209482ms
Dec 22 12:03:40.356: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.559237438s
Dec 22 12:03:42.376: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579189131s
Dec 22 12:03:44.388: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591618591s
Dec 22 12:03:46.923: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.126751845s
Dec 22 12:03:48.951: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.154224391s
Dec 22 12:03:50.969: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.172546389s
STEP: Saw pod success
Dec 22 12:03:50.969: INFO: Pod "pod-secrets-1629a312-24b3-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:03:50.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1629a312-24b3-11ea-b023-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 22 12:03:51.142: INFO: Waiting for pod pod-secrets-1629a312-24b3-11ea-b023-0242ac110005 to disappear
Dec 22 12:03:51.169: INFO: Pod pod-secrets-1629a312-24b3-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:03:51.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x9czp" for this suite.
Dec 22 12:03:59.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:03:59.370: INFO: namespace: e2e-tests-secrets-x9czp, resource: bindings, ignored listing per whitelist
Dec 22 12:03:59.465: INFO: namespace e2e-tests-secrets-x9czp deletion completed in 8.255624519s
• [SLOW TEST:21.994 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:03:59.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-mftcn
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-mftcn
STEP: Deleting pre-stop pod
Dec 22 12:04:26.945: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:04:26.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-mftcn" for this suite.
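The "prestop": 1 count in the JSON above is the server pod recording one callback from the tester pod's preStop hook, which fires when the tester is deleted. A minimal pod with an exec preStop hook looks like the sketch below; the name and command are illustrative, not the suite's actual spec (the real tester POSTs to its server pod from the hook):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo            # hypothetical name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          # Runs inside the container when deletion starts, before
          # SIGTERM is delivered; must finish within the grace period.
          command: ["/bin/sh", "-c", "echo prestop >> /tmp/hook.log"]
```

The kubelet executes the hook as part of pod termination, so side effects such as the notification counted above are visible before the container is killed.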
Dec 22 12:05:07.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:05:07.289: INFO: namespace: e2e-tests-prestop-mftcn, resource: bindings, ignored listing per whitelist
Dec 22 12:05:07.333: INFO: namespace e2e-tests-prestop-mftcn deletion completed in 40.280911402s
• [SLOW TEST:67.868 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:05:07.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-5dw2
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 12:05:07.768: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5dw2" in namespace "e2e-tests-subpath-vbmw9" to be "success or failure"
Dec 22 12:05:07.889: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 121.302536ms
Dec 22 12:05:09.964: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195448476s
Dec 22 12:05:12.027: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259310342s
Dec 22 12:05:14.114: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345813451s
Dec 22 12:05:16.147: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378763593s
Dec 22 12:05:18.509: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.7410791s
Dec 22 12:05:20.566: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.797541939s
Dec 22 12:05:22.810: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.041806176s
Dec 22 12:05:25.172: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.40384308s
Dec 22 12:05:27.186: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 19.417791847s
Dec 22 12:05:29.205: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 21.436511872s
Dec 22 12:05:31.219: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 23.45080185s
Dec 22 12:05:33.237: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 25.469248961s
Dec 22 12:05:35.258: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 27.490152159s
Dec 22 12:05:37.277: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 29.508744332s
Dec 22 12:05:39.295: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 31.52658908s
Dec 22 12:05:41.318: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 33.549681148s
Dec 22 12:05:43.335: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Running", Reason="", readiness=false. Elapsed: 35.566953687s
Dec 22 12:05:45.969: INFO: Pod "pod-subpath-test-configmap-5dw2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.200568048s
STEP: Saw pod success
Dec 22 12:05:45.969: INFO: Pod "pod-subpath-test-configmap-5dw2" satisfied condition "success or failure"
Dec 22 12:05:45.980: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-5dw2 container test-container-subpath-configmap-5dw2:
STEP: delete the pod
Dec 22 12:05:46.534: INFO: Waiting for pod pod-subpath-test-configmap-5dw2 to disappear
Dec 22 12:05:46.551: INFO: Pod pod-subpath-test-configmap-5dw2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5dw2
Dec 22 12:05:46.552: INFO: Deleting pod "pod-subpath-test-configmap-5dw2" in namespace "e2e-tests-subpath-vbmw9"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:05:46.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vbmw9" for this suite.
Dec 22 12:05:54.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:05:54.717: INFO: namespace: e2e-tests-subpath-vbmw9, resource: bindings, ignored listing per whitelist
Dec 22 12:05:54.793: INFO: namespace e2e-tests-subpath-vbmw9 deletion completed in 8.226798646s

• [SLOW TEST:47.459 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:05:54.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-lv46
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 12:05:55.057: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lv46" in namespace "e2e-tests-subpath-hhj9c" to be "success or failure"
Dec 22 12:05:55.126: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 68.7424ms
Dec 22 12:05:57.251: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194024411s
Dec 22 12:05:59.275: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217911759s
Dec 22 12:06:01.760: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703032681s
Dec 22 12:06:03.787: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729541803s
Dec 22 12:06:05.832: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.774283336s
Dec 22 12:06:07.868: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 12.810500259s
Dec 22 12:06:09.887: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 14.829933433s
Dec 22 12:06:11.940: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Pending", Reason="", readiness=false. Elapsed: 16.882373031s
Dec 22 12:06:13.967: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 18.909577793s
Dec 22 12:06:15.991: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 20.933863673s
Dec 22 12:06:18.012: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 22.954322698s
Dec 22 12:06:20.026: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 24.968599459s
Dec 22 12:06:22.073: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 27.016052459s
Dec 22 12:06:24.085: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 29.027823518s
Dec 22 12:06:26.106: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 31.048774188s
Dec 22 12:06:28.123: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 33.066209081s
Dec 22 12:06:30.155: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Running", Reason="", readiness=false. Elapsed: 35.098126655s
Dec 22 12:06:32.982: INFO: Pod "pod-subpath-test-secret-lv46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.924554336s
STEP: Saw pod success
Dec 22 12:06:32.982: INFO: Pod "pod-subpath-test-secret-lv46" satisfied condition "success or failure"
Dec 22 12:06:33.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-lv46 container test-container-subpath-secret-lv46: 
STEP: delete the pod
Dec 22 12:06:33.881: INFO: Waiting for pod pod-subpath-test-secret-lv46 to disappear
Dec 22 12:06:33.977: INFO: Pod pod-subpath-test-secret-lv46 no longer exists
STEP: Deleting pod pod-subpath-test-secret-lv46
Dec 22 12:06:33.977: INFO: Deleting pod "pod-subpath-test-secret-lv46" in namespace "e2e-tests-subpath-hhj9c"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:06:33.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hhj9c" for this suite.
Dec 22 12:06:40.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:06:40.044: INFO: namespace: e2e-tests-subpath-hhj9c, resource: bindings, ignored listing per whitelist
Dec 22 12:06:40.184: INFO: namespace e2e-tests-subpath-hhj9c deletion completed in 6.195302715s

• [SLOW TEST:45.390 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:06:40.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:06:40.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-lsrnx" to be "success or failure"
Dec 22 12:06:40.713: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 224.811697ms
Dec 22 12:06:42.787: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29867362s
Dec 22 12:06:44.809: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320953661s
Dec 22 12:06:47.013: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524892287s
Dec 22 12:06:49.452: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963897471s
Dec 22 12:06:51.472: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.983749219s
Dec 22 12:06:53.485: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.997186958s
STEP: Saw pod success
Dec 22 12:06:53.485: INFO: Pod "downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:06:53.490: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:06:54.773: INFO: Waiting for pod downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005 to disappear
Dec 22 12:06:54.972: INFO: Pod downwardapi-volume-8307f6f2-24b3-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:06:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lsrnx" for this suite.
Dec 22 12:07:03.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:07:03.333: INFO: namespace: e2e-tests-projected-lsrnx, resource: bindings, ignored listing per whitelist
Dec 22 12:07:03.453: INFO: namespace e2e-tests-projected-lsrnx deletion completed in 8.462986423s

• [SLOW TEST:23.268 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:07:03.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-90e50e13-24b3-11ea-b023-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-90e50f05-24b3-11ea-b023-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-90e50e13-24b3-11ea-b023-0242ac110005
STEP: Updating configmap cm-test-opt-upd-90e50f05-24b3-11ea-b023-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-90e50f49-24b3-11ea-b023-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:08:53.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2dpm9" for this suite.
Dec 22 12:09:17.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:09:17.437: INFO: namespace: e2e-tests-projected-2dpm9, resource: bindings, ignored listing per whitelist
Dec 22 12:09:17.604: INFO: namespace e2e-tests-projected-2dpm9 deletion completed in 24.310733357s

• [SLOW TEST:134.151 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:09:17.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:09:17.811: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.932867ms)
Dec 22 12:09:17.819: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.008ms)
Dec 22 12:09:17.829: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.760448ms)
Dec 22 12:09:17.834: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.707204ms)
Dec 22 12:09:17.919: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 84.355427ms)
Dec 22 12:09:17.933: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.585918ms)
Dec 22 12:09:17.958: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.243187ms)
Dec 22 12:09:17.968: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.860371ms)
Dec 22 12:09:17.974: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.149998ms)
Dec 22 12:09:17.979: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.072378ms)
Dec 22 12:09:17.985: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.438224ms)
Dec 22 12:09:17.990: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.474336ms)
Dec 22 12:09:17.997: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.325623ms)
Dec 22 12:09:18.003: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.036824ms)
Dec 22 12:09:18.057: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 54.15053ms)
Dec 22 12:09:18.064: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.801042ms)
Dec 22 12:09:18.070: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.332736ms)
Dec 22 12:09:18.076: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.290208ms)
Dec 22 12:09:18.083: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.215435ms)
Dec 22 12:09:18.088: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.577732ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:09:18.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-k5glq" for this suite.
Dec 22 12:09:24.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:09:24.376: INFO: namespace: e2e-tests-proxy-k5glq, resource: bindings, ignored listing per whitelist
Dec 22 12:09:24.392: INFO: namespace e2e-tests-proxy-k5glq deletion completed in 6.297333439s

• [SLOW TEST:6.788 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:09:24.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:09:24.643: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-slksf" to be "success or failure"
Dec 22 12:09:24.656: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.382596ms
Dec 22 12:09:26.982: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339002342s
Dec 22 12:09:29.010: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36675366s
Dec 22 12:09:31.592: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.948844039s
Dec 22 12:09:33.614: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.971101363s
Dec 22 12:09:35.647: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.004402569s
Dec 22 12:09:37.664: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.021171973s
STEP: Saw pod success
Dec 22 12:09:37.664: INFO: Pod "downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:09:37.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:09:38.331: INFO: Waiting for pod downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005 to disappear
Dec 22 12:09:38.337: INFO: Pod downwardapi-volume-e4e3f5a9-24b3-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:09:38.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-slksf" for this suite.
Dec 22 12:09:44.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:09:44.747: INFO: namespace: e2e-tests-downward-api-slksf, resource: bindings, ignored listing per whitelist
Dec 22 12:09:44.761: INFO: namespace e2e-tests-downward-api-slksf deletion completed in 6.412835472s

• [SLOW TEST:20.368 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:09:44.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 22 12:09:44.984: INFO: Waiting up to 5m0s for pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-2nb4l" to be "success or failure"
Dec 22 12:09:44.998: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.912107ms
Dec 22 12:09:47.428: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443618472s
Dec 22 12:09:49.444: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460347861s
Dec 22 12:09:51.716: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73239636s
Dec 22 12:09:53.740: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.755967737s
Dec 22 12:09:55.802: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.818413627s
Dec 22 12:09:58.353: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.369390874s
STEP: Saw pod success
Dec 22 12:09:58.354: INFO: Pod "downward-api-f105971e-24b3-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:09:58.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f105971e-24b3-11ea-b023-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 22 12:09:59.411: INFO: Waiting for pod downward-api-f105971e-24b3-11ea-b023-0242ac110005 to disappear
Dec 22 12:09:59.419: INFO: Pod downward-api-f105971e-24b3-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:09:59.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2nb4l" for this suite.
Dec 22 12:10:05.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:10:05.543: INFO: namespace: e2e-tests-downward-api-2nb4l, resource: bindings, ignored listing per whitelist
Dec 22 12:10:05.707: INFO: namespace e2e-tests-downward-api-2nb4l deletion completed in 6.275080669s

• [SLOW TEST:20.946 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:10:05.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 22 12:10:06.139: INFO: Waiting up to 5m0s for pod "pod-fd974d09-24b3-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-ssjch" to be "success or failure"
Dec 22 12:10:06.158: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.774711ms
Dec 22 12:10:08.171: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032336328s
Dec 22 12:10:10.188: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049168137s
Dec 22 12:10:12.209: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070451957s
Dec 22 12:10:14.286: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147246965s
Dec 22 12:10:16.308: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169308821s
Dec 22 12:10:18.322: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.183325641s
STEP: Saw pod success
Dec 22 12:10:18.322: INFO: Pod "pod-fd974d09-24b3-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:10:18.358: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fd974d09-24b3-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:10:18.566: INFO: Waiting for pod pod-fd974d09-24b3-11ea-b023-0242ac110005 to disappear
Dec 22 12:10:18.749: INFO: Pod pod-fd974d09-24b3-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:10:18.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ssjch" for this suite.
Dec 22 12:10:24.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:10:25.036: INFO: namespace: e2e-tests-emptydir-ssjch, resource: bindings, ignored listing per whitelist
Dec 22 12:10:25.076: INFO: namespace e2e-tests-emptydir-ssjch deletion completed in 6.313794219s

• [SLOW TEST:19.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:10:25.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W1222 12:10:28.421293       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 12:10:28.421: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:10:28.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qkvgv" for this suite.
Dec 22 12:10:35.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:10:35.423: INFO: namespace: e2e-tests-gc-qkvgv, resource: bindings, ignored listing per whitelist
Dec 22 12:10:35.465: INFO: namespace e2e-tests-gc-qkvgv deletion completed in 7.032026133s

• [SLOW TEST:10.389 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:10:35.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:10:35.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mj69j" for this suite.
Dec 22 12:11:00.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:11:00.134: INFO: namespace: e2e-tests-pods-mj69j, resource: bindings, ignored listing per whitelist
Dec 22 12:11:00.194: INFO: namespace e2e-tests-pods-mj69j deletion completed in 24.381197428s

• [SLOW TEST:24.729 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:11:00.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:11:00.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-7bh57" to be "success or failure"
Dec 22 12:11:00.451: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.069558ms
Dec 22 12:11:02.472: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051690461s
Dec 22 12:11:04.508: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08785325s
Dec 22 12:11:07.151: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.730492362s
Dec 22 12:11:09.278: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.856869571s
Dec 22 12:11:11.672: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.251756075s
Dec 22 12:11:14.360: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.939846596s
STEP: Saw pod success
Dec 22 12:11:14.361: INFO: Pod "downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:11:14.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:11:14.673: INFO: Waiting for pod downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005 to disappear
Dec 22 12:11:14.741: INFO: Pod downwardapi-volume-1dfc82f6-24b4-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:11:14.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7bh57" for this suite.
Dec 22 12:11:21.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:11:21.145: INFO: namespace: e2e-tests-projected-7bh57, resource: bindings, ignored listing per whitelist
Dec 22 12:11:21.176: INFO: namespace e2e-tests-projected-7bh57 deletion completed in 6.239972878s

• [SLOW TEST:20.981 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:11:21.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-2a719640-24b4-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:11:21.392: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-hfs9w" to be "success or failure"
Dec 22 12:11:21.443: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.695181ms
Dec 22 12:11:24.232: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83982787s
Dec 22 12:11:26.258: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.86584013s
Dec 22 12:11:28.903: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.511259829s
Dec 22 12:11:30.923: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.530673344s
Dec 22 12:11:32.974: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.58244378s
Dec 22 12:11:35.000: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.608449823s
STEP: Saw pod success
Dec 22 12:11:35.000: INFO: Pod "pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:11:35.005: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 12:11:35.215: INFO: Waiting for pod pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005 to disappear
Dec 22 12:11:35.227: INFO: Pod pod-projected-secrets-2a733ed2-24b4-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:11:35.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hfs9w" for this suite.
Dec 22 12:11:41.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:11:41.387: INFO: namespace: e2e-tests-projected-hfs9w, resource: bindings, ignored listing per whitelist
Dec 22 12:11:41.434: INFO: namespace e2e-tests-projected-hfs9w deletion completed in 6.198732562s

• [SLOW TEST:20.258 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:11:41.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:11:41.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-gr4sj" to be "success or failure"
Dec 22 12:11:41.719: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83263ms
Dec 22 12:11:43.973: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26073694s
Dec 22 12:11:45.987: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27462987s
Dec 22 12:11:48.140: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427458299s
Dec 22 12:11:50.157: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.444184285s
Dec 22 12:11:52.175: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.462343053s
Dec 22 12:11:54.191: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.478387648s
STEP: Saw pod success
Dec 22 12:11:54.191: INFO: Pod "downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:11:54.199: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:11:54.489: INFO: Waiting for pod downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005 to disappear
Dec 22 12:11:54.505: INFO: Pod downwardapi-volume-3699a8cd-24b4-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:11:54.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gr4sj" for this suite.
Dec 22 12:12:00.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:12:00.883: INFO: namespace: e2e-tests-downward-api-gr4sj, resource: bindings, ignored listing per whitelist
Dec 22 12:12:00.890: INFO: namespace e2e-tests-downward-api-gr4sj deletion completed in 6.367917532s

• [SLOW TEST:19.454 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:12:00.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 22 12:12:01.065: INFO: Waiting up to 5m0s for pod "pod-4223727f-24b4-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-bmt2h" to be "success or failure"
Dec 22 12:12:01.086: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.127855ms
Dec 22 12:12:03.605: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.539907792s
Dec 22 12:12:05.616: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551255456s
Dec 22 12:12:08.055: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.989446185s
Dec 22 12:12:10.072: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006770516s
Dec 22 12:12:12.092: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.026440087s
Dec 22 12:12:14.180: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.114482234s
STEP: Saw pod success
Dec 22 12:12:14.180: INFO: Pod "pod-4223727f-24b4-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:12:14.193: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4223727f-24b4-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:12:14.390: INFO: Waiting for pod pod-4223727f-24b4-11ea-b023-0242ac110005 to disappear
Dec 22 12:12:14.401: INFO: Pod pod-4223727f-24b4-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:12:14.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bmt2h" for this suite.
Dec 22 12:12:20.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:12:20.738: INFO: namespace: e2e-tests-emptydir-bmt2h, resource: bindings, ignored listing per whitelist
Dec 22 12:12:20.767: INFO: namespace e2e-tests-emptydir-bmt2h deletion completed in 6.356657526s

• [SLOW TEST:19.877 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:12:20.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 22 12:12:20.982: INFO: Waiting up to 5m0s for pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-srs5c" to be "success or failure"
Dec 22 12:12:20.998: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.527846ms
Dec 22 12:12:23.440: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458272742s
Dec 22 12:12:25.477: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494570569s
Dec 22 12:12:27.654: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.671994812s
Dec 22 12:12:29.674: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692221682s
Dec 22 12:12:31.691: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.708505671s
Dec 22 12:12:33.706: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.724347674s
STEP: Saw pod success
Dec 22 12:12:33.707: INFO: Pod "pod-4e01fba7-24b4-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:12:33.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4e01fba7-24b4-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:12:33.985: INFO: Waiting for pod pod-4e01fba7-24b4-11ea-b023-0242ac110005 to disappear
Dec 22 12:12:33.997: INFO: Pod pod-4e01fba7-24b4-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:12:33.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-srs5c" for this suite.
Dec 22 12:12:40.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:12:40.200: INFO: namespace: e2e-tests-emptydir-srs5c, resource: bindings, ignored listing per whitelist
Dec 22 12:12:40.336: INFO: namespace e2e-tests-emptydir-srs5c deletion completed in 6.253801668s

• [SLOW TEST:19.569 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:12:40.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 22 12:12:40.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-q7kjx,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7kjx/configmaps/e2e-watch-test-label-changed,UID:59deec11-24b4-11ea-a994-fa163e34d433,ResourceVersion:15676938,Generation:0,CreationTimestamp:2019-12-22 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 12:12:40.921: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-q7kjx,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7kjx/configmaps/e2e-watch-test-label-changed,UID:59deec11-24b4-11ea-a994-fa163e34d433,ResourceVersion:15676940,Generation:0,CreationTimestamp:2019-12-22 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 22 12:12:40.921: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-q7kjx,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7kjx/configmaps/e2e-watch-test-label-changed,UID:59deec11-24b4-11ea-a994-fa163e34d433,ResourceVersion:15676941,Generation:0,CreationTimestamp:2019-12-22 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 22 12:12:51.189: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-q7kjx,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7kjx/configmaps/e2e-watch-test-label-changed,UID:59deec11-24b4-11ea-a994-fa163e34d433,ResourceVersion:15676955,Generation:0,CreationTimestamp:2019-12-22 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 12:12:51.190: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-q7kjx,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7kjx/configmaps/e2e-watch-test-label-changed,UID:59deec11-24b4-11ea-a994-fa163e34d433,ResourceVersion:15676956,Generation:0,CreationTimestamp:2019-12-22 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 22 12:12:51.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-q7kjx,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7kjx/configmaps/e2e-watch-test-label-changed,UID:59deec11-24b4-11ea-a994-fa163e34d433,ResourceVersion:15676957,Generation:0,CreationTimestamp:2019-12-22 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:12:51.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-q7kjx" for this suite.
Dec 22 12:12:57.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:12:57.494: INFO: namespace: e2e-tests-watch-q7kjx, resource: bindings, ignored listing per whitelist
Dec 22 12:12:57.546: INFO: namespace e2e-tests-watch-q7kjx deletion completed in 6.306483868s

• [SLOW TEST:17.210 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:12:57.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 22 12:16:02.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:02.404: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:04.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:04.418: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:06.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:06.649: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:08.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:08.418: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:10.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:10.426: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:12.406: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:12.416: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:14.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:14.425: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:16.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:16.416: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:18.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:18.427: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:20.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:20.426: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:22.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:22.422: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:24.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:24.424: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:26.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:26.428: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:28.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:28.426: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:30.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:30.424: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:32.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:32.491: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:34.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:34.421: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:36.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:36.429: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 22 12:16:38.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 22 12:16:38.419: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:16:38.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7cvr6" for this suite.
Dec 22 12:17:02.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:17:02.707: INFO: namespace: e2e-tests-container-lifecycle-hook-7cvr6, resource: bindings, ignored listing per whitelist
Dec 22 12:17:02.950: INFO: namespace e2e-tests-container-lifecycle-hook-7cvr6 deletion completed in 24.519466233s

• [SLOW TEST:245.403 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:17:02.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1222 12:17:13.244809       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 12:17:13.245: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:17:13.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-288ql" for this suite.
Dec 22 12:17:19.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:17:19.900: INFO: namespace: e2e-tests-gc-288ql, resource: bindings, ignored listing per whitelist
Dec 22 12:17:19.918: INFO: namespace e2e-tests-gc-288ql deletion completed in 6.647741995s

• [SLOW TEST:16.969 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:17:19.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0052740b-24b5-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:17:20.285: INFO: Waiting up to 5m0s for pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-6zhsv" to be "success or failure"
Dec 22 12:17:20.312: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.457099ms
Dec 22 12:17:22.323: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037374885s
Dec 22 12:17:24.375: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089132191s
Dec 22 12:17:26.719: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433234636s
Dec 22 12:17:28.735: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449686313s
Dec 22 12:17:30.747: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.461304838s
Dec 22 12:17:32.756: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.470712941s
STEP: Saw pod success
Dec 22 12:17:32.756: INFO: Pod "pod-secrets-006813e1-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:17:32.760: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-006813e1-24b5-11ea-b023-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 22 12:17:32.916: INFO: Waiting for pod pod-secrets-006813e1-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:17:32.925: INFO: Pod pod-secrets-006813e1-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:17:32.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6zhsv" for this suite.
Dec 22 12:17:39.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:17:39.028: INFO: namespace: e2e-tests-secrets-6zhsv, resource: bindings, ignored listing per whitelist
Dec 22 12:17:39.187: INFO: namespace e2e-tests-secrets-6zhsv deletion completed in 6.255641248s
STEP: Destroying namespace "e2e-tests-secret-namespace-ll9s8" for this suite.
Dec 22 12:17:47.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:17:47.668: INFO: namespace: e2e-tests-secret-namespace-ll9s8, resource: bindings, ignored listing per whitelist
Dec 22 12:17:47.729: INFO: namespace e2e-tests-secret-namespace-ll9s8 deletion completed in 8.542155707s

• [SLOW TEST:27.811 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:17:47.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:17:47.951: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 22 12:17:47.960: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-28dsc/daemonsets","resourceVersion":"15677436"},"items":null}

Dec 22 12:17:47.965: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-28dsc/pods","resourceVersion":"15677436"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:17:47.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-28dsc" for this suite.
Dec 22 12:17:54.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:17:54.181: INFO: namespace: e2e-tests-daemonsets-28dsc, resource: bindings, ignored listing per whitelist
Dec 22 12:17:54.208: INFO: namespace e2e-tests-daemonsets-28dsc deletion completed in 6.220934504s

S [SKIPPING] [6.479 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 22 12:17:47.951: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:17:54.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 22 12:17:54.679: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:17:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sfcc4" for this suite.
Dec 22 12:18:00.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:18:01.150: INFO: namespace: e2e-tests-kubectl-sfcc4, resource: bindings, ignored listing per whitelist
Dec 22 12:18:01.196: INFO: namespace e2e-tests-kubectl-sfcc4 deletion completed in 6.351115464s

• [SLOW TEST:6.987 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:18:01.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:18:01.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-g5lcc" to be "success or failure"
Dec 22 12:18:01.451: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.338457ms
Dec 22 12:18:03.618: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182943633s
Dec 22 12:18:05.633: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197752755s
Dec 22 12:18:08.442: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.006544834s
Dec 22 12:18:10.464: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.029362483s
Dec 22 12:18:12.497: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.062449185s
STEP: Saw pod success
Dec 22 12:18:12.498: INFO: Pod "downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:18:12.510: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:18:12.897: INFO: Waiting for pod downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:18:12.927: INFO: Pod downwardapi-volume-18ed1454-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:18:12.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g5lcc" for this suite.
Dec 22 12:18:19.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:18:19.122: INFO: namespace: e2e-tests-projected-g5lcc, resource: bindings, ignored listing per whitelist
Dec 22 12:18:19.361: INFO: namespace e2e-tests-projected-g5lcc deletion completed in 6.355258582s

• [SLOW TEST:18.165 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:18:19.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 22 12:18:19.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:22.158: INFO: stderr: ""
Dec 22 12:18:22.158: INFO: stdout: "pod/pause created\n"
Dec 22 12:18:22.158: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 22 12:18:22.158: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-kmbft" to be "running and ready"
Dec 22 12:18:22.225: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 66.950038ms
Dec 22 12:18:24.239: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0812193s
Dec 22 12:18:26.247: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088718715s
Dec 22 12:18:28.263: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104511268s
Dec 22 12:18:30.276: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117302873s
Dec 22 12:18:32.288: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.130241247s
Dec 22 12:18:32.289: INFO: Pod "pause" satisfied condition "running and ready"
Dec 22 12:18:32.289: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 22 12:18:32.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:32.552: INFO: stderr: ""
Dec 22 12:18:32.552: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 22 12:18:32.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:32.758: INFO: stderr: ""
Dec 22 12:18:32.758: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 22 12:18:32.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:32.964: INFO: stderr: ""
Dec 22 12:18:32.965: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 22 12:18:32.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:33.100: INFO: stderr: ""
Dec 22 12:18:33.100: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 22 12:18:33.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:33.267: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 12:18:33.267: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 22 12:18:33.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-kmbft'
Dec 22 12:18:33.421: INFO: stderr: "No resources found.\n"
Dec 22 12:18:33.421: INFO: stdout: ""
Dec 22 12:18:33.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-kmbft -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 22 12:18:33.534: INFO: stderr: ""
Dec 22 12:18:33.534: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:18:33.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kmbft" for this suite.
Dec 22 12:18:39.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:18:39.655: INFO: namespace: e2e-tests-kubectl-kmbft, resource: bindings, ignored listing per whitelist
Dec 22 12:18:39.852: INFO: namespace e2e-tests-kubectl-kmbft deletion completed in 6.303241985s

• [SLOW TEST:20.490 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:18:39.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 22 12:18:40.074: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:18:58.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9qj8b" for this suite.
Dec 22 12:19:06.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:19:06.679: INFO: namespace: e2e-tests-init-container-9qj8b, resource: bindings, ignored listing per whitelist
Dec 22 12:19:06.700: INFO: namespace e2e-tests-init-container-9qj8b deletion completed in 8.328980339s

• [SLOW TEST:26.847 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:19:06.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:19:17.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jmntn" for this suite.
Dec 22 12:20:01.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:20:01.398: INFO: namespace: e2e-tests-kubelet-test-jmntn, resource: bindings, ignored listing per whitelist
Dec 22 12:20:01.464: INFO: namespace e2e-tests-kubelet-test-jmntn deletion completed in 44.422774877s

• [SLOW TEST:54.764 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:20:01.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-60aeda4b-24b5-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 12:20:01.859: INFO: Waiting up to 5m0s for pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-gzlts" to be "success or failure"
Dec 22 12:20:01.928: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.339745ms
Dec 22 12:20:03.950: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090497296s
Dec 22 12:20:05.962: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102397547s
Dec 22 12:20:08.004: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145120475s
Dec 22 12:20:10.028: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168504621s
Dec 22 12:20:12.055: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.195488195s
STEP: Saw pod success
Dec 22 12:20:12.055: INFO: Pod "pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:20:12.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 22 12:20:12.226: INFO: Waiting for pod pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:20:12.260: INFO: Pod pod-configmaps-60b2d548-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:20:12.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gzlts" for this suite.
Dec 22 12:20:18.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:20:18.440: INFO: namespace: e2e-tests-configmap-gzlts, resource: bindings, ignored listing per whitelist
Dec 22 12:20:18.598: INFO: namespace e2e-tests-configmap-gzlts deletion completed in 6.275120788s

• [SLOW TEST:17.133 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:20:18.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6ae27c77-24b5-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 12:20:18.956: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-hppsf" to be "success or failure"
Dec 22 12:20:19.156: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 199.413716ms
Dec 22 12:20:21.172: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215751172s
Dec 22 12:20:23.202: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245188295s
Dec 22 12:20:25.682: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.725294698s
Dec 22 12:20:27.697: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.740202854s
Dec 22 12:20:29.715: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.757857194s
Dec 22 12:20:31.734: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.777342975s
STEP: Saw pod success
Dec 22 12:20:31.734: INFO: Pod "pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:20:31.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 12:20:31.967: INFO: Waiting for pod pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:20:31.981: INFO: Pod pod-projected-configmaps-6ae5731f-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:20:31.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hppsf" for this suite.
Dec 22 12:20:38.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:20:38.344: INFO: namespace: e2e-tests-projected-hppsf, resource: bindings, ignored listing per whitelist
Dec 22 12:20:38.547: INFO: namespace e2e-tests-projected-hppsf deletion completed in 6.550663873s

• [SLOW TEST:19.949 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:20:38.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-jd9s6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jd9s6 to expose endpoints map[]
Dec 22 12:20:39.451: INFO: Get endpoints failed (80.001459ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 22 12:20:40.480: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jd9s6 exposes endpoints map[] (1.108810214s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-jd9s6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jd9s6 to expose endpoints map[pod1:[80]]
Dec 22 12:20:45.077: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.32915679s elapsed, will retry)
Dec 22 12:20:50.128: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jd9s6 exposes endpoints map[pod1:[80]] (9.379809355s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-jd9s6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jd9s6 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 22 12:20:54.615: INFO: Unexpected endpoints: found map[77c24984-24b5-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.45998026s elapsed, will retry)
Dec 22 12:21:00.832: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jd9s6 exposes endpoints map[pod1:[80] pod2:[80]] (10.677282918s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-jd9s6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jd9s6 to expose endpoints map[pod2:[80]]
Dec 22 12:21:02.026: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jd9s6 exposes endpoints map[pod2:[80]] (1.175022146s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-jd9s6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jd9s6 to expose endpoints map[]
Dec 22 12:21:03.481: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jd9s6 exposes endpoints map[] (1.441086779s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:21:04.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jd9s6" for this suite.
Dec 22 12:21:28.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:21:28.873: INFO: namespace: e2e-tests-services-jd9s6, resource: bindings, ignored listing per whitelist
Dec 22 12:21:28.941: INFO: namespace e2e-tests-services-jd9s6 deletion completed in 24.69106605s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.392 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:21:28.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 22 12:21:29.165: INFO: Waiting up to 5m0s for pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-lvdct" to be "success or failure"
Dec 22 12:21:29.179: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.418975ms
Dec 22 12:21:31.609: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443545653s
Dec 22 12:21:33.637: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471927286s
Dec 22 12:21:35.711: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5459901s
Dec 22 12:21:37.741: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.575462646s
Dec 22 12:21:39.902: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.737138064s
Dec 22 12:21:41.941: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.775611541s
STEP: Saw pod success
Dec 22 12:21:41.941: INFO: Pod "downward-api-94be5903-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:21:41.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-94be5903-24b5-11ea-b023-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 22 12:21:42.046: INFO: Waiting for pod downward-api-94be5903-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:21:42.126: INFO: Pod downward-api-94be5903-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:21:42.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lvdct" for this suite.
Dec 22 12:21:48.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:21:48.279: INFO: namespace: e2e-tests-downward-api-lvdct, resource: bindings, ignored listing per whitelist
Dec 22 12:21:48.395: INFO: namespace e2e-tests-downward-api-lvdct deletion completed in 6.25064361s

• [SLOW TEST:19.453 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:21:48.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a09319f9-24b5-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:21:49.350: INFO: Waiting up to 5m0s for pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-4lxfm" to be "success or failure"
Dec 22 12:21:49.441: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.301474ms
Dec 22 12:21:51.815: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464814098s
Dec 22 12:21:53.841: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49165789s
Dec 22 12:21:55.979: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629682112s
Dec 22 12:21:57.988: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.638598062s
Dec 22 12:22:00.097: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.747207429s
Dec 22 12:22:02.109: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.75948123s
STEP: Saw pod success
Dec 22 12:22:02.109: INFO: Pod "pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:22:02.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 22 12:22:03.047: INFO: Waiting for pod pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:22:03.065: INFO: Pod pod-secrets-a09d6fa2-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:22:03.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4lxfm" for this suite.
Dec 22 12:22:09.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:22:09.650: INFO: namespace: e2e-tests-secrets-4lxfm, resource: bindings, ignored listing per whitelist
Dec 22 12:22:09.741: INFO: namespace e2e-tests-secrets-4lxfm deletion completed in 6.270834244s

• [SLOW TEST:21.345 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:22:09.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:22:09.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 22 12:22:10.041: INFO: stderr: ""
Dec 22 12:22:10.041: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 22 12:22:10.044: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:22:10.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c2lsq" for this suite.
Dec 22 12:22:16.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:22:16.132: INFO: namespace: e2e-tests-kubectl-c2lsq, resource: bindings, ignored listing per whitelist
Dec 22 12:22:16.246: INFO: namespace e2e-tests-kubectl-c2lsq deletion completed in 6.191336446s

S [SKIPPING] [6.505 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 22 12:22:10.044: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:22:16.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-b0ff08fe-24b5-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 12:22:16.718: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-d5p25" to be "success or failure"
Dec 22 12:22:16.753: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.023289ms
Dec 22 12:22:18.770: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051677026s
Dec 22 12:22:20.800: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082221861s
Dec 22 12:22:22.815: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096590881s
Dec 22 12:22:24.975: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257274594s
Dec 22 12:22:26.989: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.270713037s
STEP: Saw pod success
Dec 22 12:22:26.989: INFO: Pod "pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:22:26.993: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 12:22:27.077: INFO: Waiting for pod pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:22:27.088: INFO: Pod pod-projected-configmaps-b1060afd-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:22:27.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d5p25" for this suite.
Dec 22 12:22:33.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:22:33.266: INFO: namespace: e2e-tests-projected-d5p25, resource: bindings, ignored listing per whitelist
Dec 22 12:22:33.341: INFO: namespace e2e-tests-projected-d5p25 deletion completed in 6.247626346s

• [SLOW TEST:17.095 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:22:33.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 22 12:22:57.831: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 12:22:57.865: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 12:22:59.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 12:23:00.142: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 12:23:01.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 12:23:02.332: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 12:23:03.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 12:23:03.884: INFO: Pod pod-with-prestop-http-hook still exists
Dec 22 12:23:05.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 22 12:23:05.884: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:23:05.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-h2x86" for this suite.
Dec 22 12:23:35.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:23:36.069: INFO: namespace: e2e-tests-container-lifecycle-hook-h2x86, resource: bindings, ignored listing per whitelist
Dec 22 12:23:36.175: INFO: namespace e2e-tests-container-lifecycle-hook-h2x86 deletion completed in 30.231914478s

• [SLOW TEST:62.833 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:23:36.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 22 12:23:36.472: INFO: Waiting up to 5m0s for pod "pod-e096db4b-24b5-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-8p2bs" to be "success or failure"
Dec 22 12:23:36.514: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.753771ms
Dec 22 12:23:38.537: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064449448s
Dec 22 12:23:40.576: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103900437s
Dec 22 12:23:42.645: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172739127s
Dec 22 12:23:44.700: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228064538s
Dec 22 12:23:46.716: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.243572836s
Dec 22 12:23:48.745: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.27300323s
STEP: Saw pod success
Dec 22 12:23:48.745: INFO: Pod "pod-e096db4b-24b5-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:23:48.754: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e096db4b-24b5-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:23:48.974: INFO: Waiting for pod pod-e096db4b-24b5-11ea-b023-0242ac110005 to disappear
Dec 22 12:23:48.988: INFO: Pod pod-e096db4b-24b5-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:23:48.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8p2bs" for this suite.
Dec 22 12:23:55.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:23:55.213: INFO: namespace: e2e-tests-emptydir-8p2bs, resource: bindings, ignored listing per whitelist
Dec 22 12:23:55.240: INFO: namespace e2e-tests-emptydir-8p2bs deletion completed in 6.247016731s

• [SLOW TEST:19.064 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:23:55.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1222 12:24:12.458572       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 12:24:12.458: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:24:12.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zln6w" for this suite.
Dec 22 12:24:42.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:24:42.815: INFO: namespace: e2e-tests-gc-zln6w, resource: bindings, ignored listing per whitelist
Dec 22 12:24:42.925: INFO: namespace e2e-tests-gc-zln6w deletion completed in 30.462899932s

• [SLOW TEST:47.684 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:24:42.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-085b6d9a-24b6-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:24:43.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-vrn25" to be "success or failure"
Dec 22 12:24:43.228: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.817567ms
Dec 22 12:24:45.244: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029640557s
Dec 22 12:24:47.257: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042786603s
Dec 22 12:24:49.521: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.306873723s
Dec 22 12:24:51.538: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323509223s
Dec 22 12:24:53.569: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.354271789s
STEP: Saw pod success
Dec 22 12:24:53.569: INFO: Pod "pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:24:53.595: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 12:24:53.787: INFO: Waiting for pod pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005 to disappear
Dec 22 12:24:53.807: INFO: Pod pod-projected-secrets-085cbb59-24b6-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:24:53.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vrn25" for this suite.
Dec 22 12:24:59.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:25:00.113: INFO: namespace: e2e-tests-projected-vrn25, resource: bindings, ignored listing per whitelist
Dec 22 12:25:00.157: INFO: namespace e2e-tests-projected-vrn25 deletion completed in 6.327499831s

• [SLOW TEST:17.232 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
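The projected-secret case above creates a pod roughly like the following: a projected volume carrying a secret, with `defaultMode` controlling the file permissions and the pod-level `fsGroup`/`runAsUser` making the files readable as non-root. This is a hedged sketch of the pattern, not the exact object the framework builds; all names and numeric values are placeholders:

```yaml
# Minimal sketch, assuming a pre-existing secret named "demo-secret".
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  securityContext:
    runAsUser: 1000        # run as non-root
    fsGroup: 2000          # group ownership applied to volume files
  restartPolicy: Never
  containers:
  - name: secret-reader
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0440    # file mode for the projected entries
      sources:
      - secret:
          name: demo-secret
```

The test asserts that the mounted files carry the expected mode and are readable by the non-root user, which is why the pod is expected to reach "Succeeded".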
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:25:00.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gk8hc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 22 12:25:00.711: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 22 12:25:35.025: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.4:8080/dial?request=hostName&protocol=udp&host=10.32.0.5&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gk8hc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 22 12:25:35.025: INFO: >>> kubeConfig: /root/.kube/config
Dec 22 12:25:35.495: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:25:35.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gk8hc" for this suite.
Dec 22 12:26:01.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:26:01.851: INFO: namespace: e2e-tests-pod-network-test-gk8hc, resource: bindings, ignored listing per whitelist
Dec 22 12:26:01.883: INFO: namespace e2e-tests-pod-network-test-gk8hc deletion completed in 26.367237456s

• [SLOW TEST:61.726 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
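The `/dial` request logged above is the core of this networking check: an HTTP call to one test pod (10.32.0.4:8080) asks it to send a UDP probe to another test pod (10.32.0.5:8081) and report the hostname it got back. A hedged sketch of the receiving side (the image is a placeholder; the suite uses its own network-test image):

```yaml
# Illustrative only: a pod exposing the UDP port that the /dial probe targets.
# The e2e framework generates these pods itself with its own image and names.
apiVersion: v1
kind: Pod
metadata:
  name: netserver
  labels:
    app: netserver
spec:
  containers:
  - name: webserver
    image: busybox:1.29    # placeholder image
    ports:
    - containerPort: 8081
      protocol: UDP
```

The "Waiting for endpoints: map[]" line indicates the dialer received answers from every expected pod, leaving no endpoints outstanding.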
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:26:01.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 22 12:26:09.448: INFO: 10 pods remaining
Dec 22 12:26:09.449: INFO: 10 pods has nil DeletionTimestamp
Dec 22 12:26:09.449: INFO: 
Dec 22 12:26:11.256: INFO: 10 pods remaining
Dec 22 12:26:11.256: INFO: 4 pods has nil DeletionTimestamp
Dec 22 12:26:11.256: INFO: 
Dec 22 12:26:12.692: INFO: 4 pods remaining
Dec 22 12:26:12.692: INFO: 0 pods has nil DeletionTimestamp
Dec 22 12:26:12.692: INFO: 
Dec 22 12:26:13.276: INFO: 0 pods remaining
Dec 22 12:26:13.276: INFO: 0 pods has nil DeletionTimestamp
Dec 22 12:26:13.276: INFO: 
STEP: Gathering metrics
W1222 12:26:14.189004       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 22 12:26:14.189: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:26:14.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nr9ff" for this suite.
Dec 22 12:26:28.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:26:28.475: INFO: namespace: e2e-tests-gc-nr9ff, resource: bindings, ignored listing per whitelist
Dec 22 12:26:28.479: INFO: namespace e2e-tests-gc-nr9ff deletion completed in 14.266847235s

• [SLOW TEST:26.595 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
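The "keep the rc around" behavior above is driven by the `propagationPolicy` in the delete request: with foreground propagation, the ReplicationController is only removed after all of its pods have been deleted, which matches the countdown of "pods remaining" in the log. A hedged sketch of that request body (the log does not show the exact options the test sends):

```yaml
# Illustrative DeleteOptions body for a foreground cascading delete.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner is deleted only after its dependents
```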
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:26:28.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 22 12:26:29.983: INFO: Waiting up to 5m0s for pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005" in namespace "e2e-tests-containers-jt2wj" to be "success or failure"
Dec 22 12:26:30.033: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.115362ms
Dec 22 12:26:32.062: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078557688s
Dec 22 12:26:34.079: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095632519s
Dec 22 12:26:38.132: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148444039s
Dec 22 12:26:40.158: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.174868874s
Dec 22 12:26:42.172: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.188737721s
Dec 22 12:26:44.196: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.212848314s
STEP: Saw pod success
Dec 22 12:26:44.196: INFO: Pod "client-containers-476a2860-24b6-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:26:44.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-476a2860-24b6-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:26:44.812: INFO: Waiting for pod client-containers-476a2860-24b6-11ea-b023-0242ac110005 to disappear
Dec 22 12:26:44.827: INFO: Pod client-containers-476a2860-24b6-11ea-b023-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:26:44.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jt2wj" for this suite.
Dec 22 12:26:50.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:26:50.960: INFO: namespace: e2e-tests-containers-jt2wj, resource: bindings, ignored listing per whitelist
Dec 22 12:26:51.065: INFO: namespace e2e-tests-containers-jt2wj deletion completed in 6.227792993s

• [SLOW TEST:22.586 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
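The Docker Containers case above verifies that when `command` and `args` are both unset, the container runs the image's own ENTRYPOINT and CMD. A minimal sketch of such a pod (image name is a placeholder, not the one the test uses):

```yaml
# Illustrative: no command/args, so the image defaults apply.
apiVersion: v1
kind: Pod
metadata:
  name: use-image-defaults
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # command: and args: deliberately omitted ->
    # the image's ENTRYPOINT/CMD are executed unchanged
```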
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:26:51.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 22 12:26:51.331: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:27:14.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fv5jl" for this suite.
Dec 22 12:27:38.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:27:38.843: INFO: namespace: e2e-tests-init-container-fv5jl, resource: bindings, ignored listing per whitelist
Dec 22 12:27:38.855: INFO: namespace e2e-tests-init-container-fv5jl deletion completed in 24.164751182s

• [SLOW TEST:47.789 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
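The init-container case above relies on the ordering guarantee that init containers run to completion, one at a time and in order, before any app container starts; with `restartPolicy: Always` the pod then keeps running. A hedged sketch of the shape of pod this test creates (container names mirror the suite's convention; images are what the later pod dump in this log shows the suite using):

```yaml
# Minimal sketch: two init containers that must both succeed before run1 starts.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1   # long-running app container
```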
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:27:38.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 22 12:27:39.169: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 22 12:27:39.179: INFO: Waiting for terminating namespaces to be deleted...
Dec 22 12:27:39.183: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 22 12:27:39.209: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 22 12:27:39.209: INFO: 	Container weave ready: true, restart count 0
Dec 22 12:27:39.209: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 12:27:39.209: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 22 12:27:39.210: INFO: 	Container coredns ready: true, restart count 0
Dec 22 12:27:39.210: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:27:39.210: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:27:39.210: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:27:39.210: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 22 12:27:39.210: INFO: 	Container coredns ready: true, restart count 0
Dec 22 12:27:39.210: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 22 12:27:39.210: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 12:27:39.210: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7765901d-24b6-11ea-b023-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7765901d-24b6-11ea-b023-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7765901d-24b6-11ea-b023-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:28:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9tg79" for this suite.
Dec 22 12:28:17.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:28:17.932: INFO: namespace: e2e-tests-sched-pred-9tg79, resource: bindings, ignored listing per whitelist
Dec 22 12:28:17.998: INFO: namespace e2e-tests-sched-pred-9tg79 deletion completed in 16.297666366s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:39.142 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
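The scheduling case above first labels the node (the log shows the generated label `kubernetes.io/e2e-7765901d-24b6-11ea-b023-0242ac110005` with value `42`), then relaunches the pod with a matching `nodeSelector` and asserts it lands on that node. A sketch of the relaunched pod, reusing the label from this run (the pod name and image are placeholders):

```yaml
# Illustrative: nodeSelector must match the label just applied to the node.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-7765901d-24b6-11ea-b023-0242ac110005: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
```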
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:28:17.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 12:28:18.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lpwfx'
Dec 22 12:28:18.409: INFO: stderr: ""
Dec 22 12:28:18.410: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 22 12:28:28.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lpwfx -o json'
Dec 22 12:28:30.421: INFO: stderr: ""
Dec 22 12:28:30.421: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-22T12:28:18Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-lpwfx\",\n        \"resourceVersion\": \"15678959\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-lpwfx/pods/e2e-test-nginx-pod\",\n        \"uid\": \"88a8fcdc-24b6-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-lgtg9\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-lgtg9\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-lgtg9\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T12:28:18Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T12:28:27Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T12:28:27Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-22T12:28:18Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://ebf3942480c6e82af98f361a0c480a593c514f32144ec99224a6f7704fd92efb\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2019-12-22T12:28:26Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-22T12:28:18Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 22 12:28:30.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-lpwfx'
Dec 22 12:28:30.815: INFO: stderr: ""
Dec 22 12:28:30.816: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 22 12:28:30.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lpwfx'
Dec 22 12:28:40.425: INFO: stderr: ""
Dec 22 12:28:40.425: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:28:40.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lpwfx" for this suite.
Dec 22 12:28:46.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:28:46.944: INFO: namespace: e2e-tests-kubectl-lpwfx, resource: bindings, ignored listing per whitelist
Dec 22 12:28:46.952: INFO: namespace e2e-tests-kubectl-lpwfx deletion completed in 6.508979001s

• [SLOW TEST:28.953 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
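The `kubectl replace -f -` step above pipes in a full pod manifest whose only meaningful change is the container image (the log then verifies the pod carries `docker.io/library/busybox:1.29`). A hedged reconstruction of the relevant part of that manifest, trimmed to the fields needed for illustration:

```yaml
# Sketch of the replacement manifest: same pod identity, image swapped.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-lpwfx
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # replaces nginx:1.14-alpine
```

Note that `kubectl replace` on a live pod only succeeds for mutable fields; the container image is one of the few pod spec fields that may be updated in place.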
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:28:46.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 22 12:28:47.148: INFO: PodSpec: initContainers in spec.initContainers
Dec 22 12:30:06.238: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-99d11144-24b6-11ea-b023-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-ggwtm", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-ggwtm/pods/pod-init-99d11144-24b6-11ea-b023-0242ac110005", UID:"99d3735c-24b6-11ea-a994-fa163e34d433", ResourceVersion:"15679128", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712614527, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"148495031", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dmsmb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026206c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dmsmb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dmsmb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dmsmb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002817438), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0017da120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028177b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028177d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0028177d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0028177dc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712614527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712614527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712614527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712614527, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc00286e1a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026a05b0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026a0620)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://3eadd0bf7c6752aea65b387f3082a46c89324d7c9d78e9eb6f5faf35f60bef9a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00286e1e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00286e1c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:30:06.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-ggwtm" for this suite.
Dec 22 12:30:30.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:30:30.369: INFO: namespace: e2e-tests-init-container-ggwtm, resource: bindings, ignored listing per whitelist
Dec 22 12:30:30.439: INFO: namespace e2e-tests-init-container-ggwtm deletion completed in 24.159518382s

• [SLOW TEST:103.487 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
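The pod dumped above is the one this spec creates: two init containers (init1 running /bin/false, init2 running /bin/true) ahead of one app container run1, with RestartPolicy "Always". The guarantee under test is that init containers run sequentially and the app container never starts while an earlier init container keeps failing — which is why the status shows init1 with RestartCount 3 while init2 and run1 stay Waiting. A minimal Python sketch of that ordering rule (the function and return shape are illustrative, not the e2e framework's code):

```python
def pod_startable_containers(init_results):
    """Decide whether app containers may start, given init container exit codes.

    init_results: ordered list of (name, exit_code) for init containers.
    App containers are admitted only after every init container exits 0;
    with restartPolicy=Always a failed init container is retried in place,
    so later init containers and all app containers stay blocked.
    """
    for name, exit_code in init_results:
        if exit_code != 0:
            # init1 exiting nonzero (/bin/false) blocks init2 and run1
            return {"blocked_on": name, "app_started": False}
    return {"blocked_on": None, "app_started": True}

# Mirrors the pod in the log: init1 fails, init2 never runs, run1 waits.
status = pod_startable_containers([("init1", 1), ("init2", 0)])
```

With a passing first init container the same function admits the app containers, which is the companion RestartAlways success case.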
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:30:30.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 22 12:30:41.356: INFO: Successfully updated pod "annotationupdated7842dda-24b6-11ea-b023-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:30:43.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t7rqf" for this suite.
Dec 22 12:31:07.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:31:07.750: INFO: namespace: e2e-tests-downward-api-t7rqf, resource: bindings, ignored listing per whitelist
Dec 22 12:31:07.774: INFO: namespace e2e-tests-downward-api-t7rqf deletion completed in 24.267504507s

• [SLOW TEST:37.334 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:31:07.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 22 12:31:07.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 22 12:31:07.962: INFO: Waiting for terminating namespaces to be deleted...
Dec 22 12:31:07.965: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 22 12:31:07.984: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Dec 22 12:31:07.984: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 12:31:07.984: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:31:07.984: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 22 12:31:07.984: INFO: 	Container weave ready: true, restart count 0
Dec 22 12:31:07.984: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 12:31:07.984: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 22 12:31:07.984: INFO: 	Container coredns ready: true, restart count 0
Dec 22 12:31:07.984: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:31:07.984: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:31:07.984: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:31:07.984: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 22 12:31:07.984: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e2b1654805367c], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:31:09.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-28s66" for this suite.
Dec 22 12:31:15.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:31:15.282: INFO: namespace: e2e-tests-sched-pred-28s66, resource: bindings, ignored listing per whitelist
Dec 22 12:31:15.609: INFO: namespace e2e-tests-sched-pred-28s66 deletion completed in 6.502968919s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.835 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
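The FailedScheduling event above ("0/1 nodes are available: 1 node(s) didn't match node selector") comes from the scheduler's node-selector predicate: a node is feasible only when its labels contain every key/value pair in the pod's nodeSelector, verbatim. A hedged Python sketch of that check (helper names and the example label are illustrative):

```python
def matches_node_selector(node_labels, node_selector):
    """True iff every nodeSelector entry appears verbatim in the node's labels."""
    return all(node_labels.get(key) == value for key, value in node_selector.items())

def feasible_nodes(nodes, node_selector):
    """Filter candidate nodes the way the NodeSelector predicate does."""
    return [name for name, labels in nodes.items()
            if matches_node_selector(labels, node_selector)]

# One node, none carrying the requested label -> "0/1 nodes are available".
nodes = {"hunter-server-hu5at5svl7ps": {"kubernetes.io/hostname": "hunter-server-hu5at5svl7ps"}}
fits = feasible_nodes(nodes, {"some-label": "nonempty-value"})
```

An empty result is exactly the condition the spec expects: the restricted pod stays Pending and the scheduler emits the FailedScheduling warning seen in the log.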
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:31:15.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 22 12:31:28.003: INFO: Pod pod-hostip-f27fd1be-24b6-11ea-b023-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:31:28.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jzvgr" for this suite.
Dec 22 12:31:52.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:31:52.202: INFO: namespace: e2e-tests-pods-jzvgr, resource: bindings, ignored listing per whitelist
Dec 22 12:31:52.229: INFO: namespace e2e-tests-pods-jzvgr deletion completed in 24.219711387s

• [SLOW TEST:36.618 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:31:52.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-083e9d1d-24b7-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:31:52.448: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-cz864" to be "success or failure"
Dec 22 12:31:52.478: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.10284ms
Dec 22 12:31:54.502: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053731421s
Dec 22 12:31:56.521: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073039689s
Dec 22 12:31:58.556: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107855073s
Dec 22 12:32:00.864: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415642584s
Dec 22 12:32:02.891: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.443132977s
STEP: Saw pod success
Dec 22 12:32:02.891: INFO: Pod "pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:32:02.900: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 12:32:03.001: INFO: Waiting for pod pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:32:03.081: INFO: Pod pod-projected-secrets-083fe1c6-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:32:03.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cz864" for this suite.
Dec 22 12:32:09.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:32:09.385: INFO: namespace: e2e-tests-projected-cz864, resource: bindings, ignored listing per whitelist
Dec 22 12:32:09.435: INFO: namespace e2e-tests-projected-cz864 deletion completed in 6.279919897s

• [SLOW TEST:17.206 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
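Several of these specs share the polling pattern visible in the log: create a pod, then check its phase every couple of seconds for up to 5m0s until it reports Succeeded or Failed ("success or failure"). A simplified sketch of that wait loop (timings, names, and the simulated clock are illustrative, not the framework's actual implementation, which sleeps in real time):

```python
def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() until the pod reaches a terminal phase or time runs out.

    get_phase: callable returning the current pod phase string.
    Returns the terminal phase, mirroring the 'Waiting up to 5m0s for pod
    ... to be "success or failure"' lines; raises TimeoutError otherwise.
    """
    elapsed = 0
    phase = "Unknown"
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        elapsed += interval_s  # the real framework sleeps interval_s here
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")

# Simulate the log above: several Pending polls, then Succeeded.
phases = iter(["Pending"] * 5 + ["Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
```

The "Saw pod success" STEP then corresponds to the returned phase equaling "Succeeded", after which the test fetches the container's logs and deletes the pod.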
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:32:09.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 22 12:32:09.625: INFO: Waiting up to 5m0s for pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-containers-lsnfl" to be "success or failure"
Dec 22 12:32:09.704: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.056992ms
Dec 22 12:32:12.190: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565097239s
Dec 22 12:32:14.196: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570355123s
Dec 22 12:32:16.222: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596392777s
Dec 22 12:32:18.242: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616278882s
Dec 22 12:32:20.378: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.752670067s
Dec 22 12:32:22.398: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.772903701s
STEP: Saw pod success
Dec 22 12:32:22.398: INFO: Pod "client-containers-127eef54-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:32:22.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-127eef54-24b7-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:32:23.511: INFO: Waiting for pod client-containers-127eef54-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:32:24.164: INFO: Pod client-containers-127eef54-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:32:24.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lsnfl" for this suite.
Dec 22 12:32:30.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:32:30.947: INFO: namespace: e2e-tests-containers-lsnfl, resource: bindings, ignored listing per whitelist
Dec 22 12:32:31.104: INFO: namespace e2e-tests-containers-lsnfl deletion completed in 6.923776246s

• [SLOW TEST:21.669 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:32:31.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1f63ace4-24b7-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 12:32:31.322: INFO: Waiting up to 5m0s for pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-rw2wq" to be "success or failure"
Dec 22 12:32:31.338: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.097125ms
Dec 22 12:32:33.387: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064622046s
Dec 22 12:32:35.416: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093645074s
Dec 22 12:32:37.586: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263419487s
Dec 22 12:32:39.678: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.355789433s
Dec 22 12:32:41.690: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.367952084s
Dec 22 12:32:43.722: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.399657144s
STEP: Saw pod success
Dec 22 12:32:43.722: INFO: Pod "pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:32:43.754: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005 container secret-env-test: 
STEP: delete the pod
Dec 22 12:32:44.454: INFO: Waiting for pod pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:32:44.481: INFO: Pod pod-secrets-1f6575c1-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:32:44.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rw2wq" for this suite.
Dec 22 12:32:52.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:32:52.723: INFO: namespace: e2e-tests-secrets-rw2wq, resource: bindings, ignored listing per whitelist
Dec 22 12:32:52.747: INFO: namespace e2e-tests-secrets-rw2wq deletion completed in 8.252990195s

• [SLOW TEST:21.643 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:32:52.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:33:05.394: INFO: Waiting up to 5m0s for pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-pods-dxk7p" to be "success or failure"
Dec 22 12:33:05.566: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 172.466226ms
Dec 22 12:33:07.584: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190258345s
Dec 22 12:33:09.602: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207753967s
Dec 22 12:33:11.929: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.534573409s
Dec 22 12:33:13.946: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552030823s
Dec 22 12:33:15.981: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.587306132s
STEP: Saw pod success
Dec 22 12:33:15.981: INFO: Pod "client-envvars-33b921d1-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:33:15.986: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-33b921d1-24b7-11ea-b023-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 22 12:33:16.677: INFO: Waiting for pod client-envvars-33b921d1-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:33:17.074: INFO: Pod client-envvars-33b921d1-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:33:17.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dxk7p" for this suite.
Dec 22 12:34:03.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:34:03.407: INFO: namespace: e2e-tests-pods-dxk7p, resource: bindings, ignored listing per whitelist
Dec 22 12:34:03.414: INFO: namespace e2e-tests-pods-dxk7p deletion completed in 46.324586075s

• [SLOW TEST:70.667 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
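This spec checks the docker-links-style environment variables the kubelet injects into containers for each visible service: upper-case the service name, replace dashes with underscores, and emit the documented {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT family plus the per-port variants. A sketch of that derivation, simplified to a single unnamed port; the example service below is hypothetical, not the one the spec actually creates:

```python
def service_env_vars(name, cluster_ip, port, protocol="TCP"):
    """Derive the kubelet-injected service discovery environment variables.

    Follows the documented convention: service name upper-cased with '-'
    replaced by '_'. Simplified: one port, no named-port variants.
    """
    svc = name.upper().replace("-", "_")
    link = f"{protocol.lower()}://{cluster_ip}:{port}"
    prefix = f"{svc}_PORT_{port}_{protocol}"
    return {
        f"{svc}_SERVICE_HOST": cluster_ip,
        f"{svc}_SERVICE_PORT": str(port),
        f"{svc}_PORT": link,
        prefix: link,
        f"{prefix}_PROTO": protocol.lower(),
        f"{prefix}_PORT": str(port),
        f"{prefix}_ADDR": cluster_ip,
    }

# Hypothetical service for illustration only.
env = service_env_vars("my-service", "10.0.0.11", 8080)
```

The test container (env3cont in the log) simply prints its environment, and the framework asserts these variables are present for the service it created earlier in the namespace.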
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:34:03.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 22 12:34:03.649: INFO: Waiting up to 5m0s for pod "pod-5674b305-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-vfglm" to be "success or failure"
Dec 22 12:34:03.656: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.533798ms
Dec 22 12:34:05.936: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287188583s
Dec 22 12:34:07.969: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320216313s
Dec 22 12:34:09.984: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335328251s
Dec 22 12:34:12.005: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356528917s
Dec 22 12:34:14.017: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36831885s
Dec 22 12:34:16.285: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.635939582s
STEP: Saw pod success
Dec 22 12:34:16.285: INFO: Pod "pod-5674b305-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:34:16.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5674b305-24b7-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:34:16.831: INFO: Waiting for pod pod-5674b305-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:34:16.838: INFO: Pod pod-5674b305-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:34:16.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vfglm" for this suite.
Dec 22 12:34:22.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:34:22.944: INFO: namespace: e2e-tests-emptydir-vfglm, resource: bindings, ignored listing per whitelist
Dec 22 12:34:23.028: INFO: namespace e2e-tests-emptydir-vfglm deletion completed in 6.183704333s

• [SLOW TEST:19.612 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
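The `(non-root,0666,tmpfs)` test above runs a non-root container against a memory-backed `emptyDir` volume and verifies a file created with mode 0666. A minimal manifest reproducing that setup might look like the following sketch; the pod name, image, user ID, and mount path are illustrative assumptions, not values taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs        # illustrative name
spec:
  securityContext:
    runAsUser: 1001                # non-root, as in this test variant
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image; the e2e suite ships its own test image
    # Create a file with mode 0666 on the tmpfs mount, then show its permissions
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # backs the emptyDir with tmpfs
```

`medium: Memory` is what distinguishes the tmpfs variant from the default (node-disk-backed) `emptyDir` cases elsewhere in the suite.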
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:34:23.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-622aa0c7-24b7-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 12:34:23.345: INFO: Waiting up to 5m0s for pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-9fz8g" to be "success or failure"
Dec 22 12:34:23.423: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.860593ms
Dec 22 12:34:26.069: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.723576226s
Dec 22 12:34:28.086: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.740501991s
Dec 22 12:34:30.164: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819046032s
Dec 22 12:34:32.252: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.907107376s
Dec 22 12:34:34.269: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.924010195s
STEP: Saw pod success
Dec 22 12:34:34.269: INFO: Pod "pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:34:34.289: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 22 12:34:34.583: INFO: Waiting for pod pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:34:34.654: INFO: Pod pod-configmaps-622b78b6-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:34:34.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9fz8g" for this suite.
Dec 22 12:34:40.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:34:40.762: INFO: namespace: e2e-tests-configmap-9fz8g, resource: bindings, ignored listing per whitelist
Dec 22 12:34:40.948: INFO: namespace e2e-tests-configmap-9fz8g deletion completed in 6.269018788s

• [SLOW TEST:17.920 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
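The ConfigMap test above mounts the same ConfigMap at two paths inside one pod and reads it through both. A sketch of that shape, assuming an illustrative ConfigMap name and key (the log only shows the generated name `configmap-test-volume-622aa0c7-…`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # illustrative; the suite generates a unique name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    # Read the same key through both mount points
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  restartPolicy: Never
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume  # same ConfigMap, second volume
```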
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:34:40.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 12:34:41.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2v2qr'
Dec 22 12:34:41.294: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 12:34:41.294: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 22 12:34:43.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2v2qr'
Dec 22 12:34:44.357: INFO: stderr: ""
Dec 22 12:34:44.357: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:34:44.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2v2qr" for this suite.
Dec 22 12:34:50.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:34:50.657: INFO: namespace: e2e-tests-kubectl-2v2qr, resource: bindings, ignored listing per whitelist
Dec 22 12:34:50.692: INFO: namespace e2e-tests-kubectl-2v2qr deletion completed in 6.325427335s

• [SLOW TEST:9.743 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
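The stderr captured at 12:34:41 shows that `kubectl run --generator=deployment/apps.v1` was already deprecated on this v1.13 client. On later kubectl versions `kubectl run` only creates pods, and the equivalent Deployment is created explicitly. A sketch of what the test's deprecated invocation effectively produced (replica count and labels are assumptions; v1.13 generated its own labels):

```yaml
# Deprecated form used in the log:
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# Current equivalent:
#   kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        app: e2e-test-nginx-deployment
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Note the log's delete output says `deployment.extensions "e2e-test-nginx-deployment" deleted` — on v1.13 the object was still served through the `extensions` API group as well as `apps`.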
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:34:50.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:34:50.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-725hv" to be "success or failure"
Dec 22 12:34:50.946: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.820345ms
Dec 22 12:34:53.387: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45348416s
Dec 22 12:34:55.416: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481842117s
Dec 22 12:34:57.971: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.037286313s
Dec 22 12:34:59.986: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.051799333s
Dec 22 12:35:02.016: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.082285806s
STEP: Saw pod success
Dec 22 12:35:02.016: INFO: Pod "downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:35:02.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:35:03.300: INFO: Waiting for pod downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:35:03.336: INFO: Pod downwardapi-volume-72a41e60-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:35:03.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-725hv" for this suite.
Dec 22 12:35:09.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:35:09.670: INFO: namespace: e2e-tests-downward-api-725hv, resource: bindings, ignored listing per whitelist
Dec 22 12:35:09.739: INFO: namespace e2e-tests-downward-api-725hv deletion completed in 6.384636138s

• [SLOW TEST:19.047 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
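The Downward API test above projects the container's own memory request into a file via a `downwardAPI` volume and has the container print it. A minimal sketch, with an illustrative pod name, image, and request value (the container name `client-container` matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"               # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory  # exposed in bytes with the default divisor
```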
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:35:09.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 22 12:35:10.182: INFO: Number of nodes with available pods: 0
Dec 22 12:35:10.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:11.212: INFO: Number of nodes with available pods: 0
Dec 22 12:35:11.212: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:12.364: INFO: Number of nodes with available pods: 0
Dec 22 12:35:12.365: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:13.378: INFO: Number of nodes with available pods: 0
Dec 22 12:35:13.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:14.222: INFO: Number of nodes with available pods: 0
Dec 22 12:35:14.222: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:15.204: INFO: Number of nodes with available pods: 0
Dec 22 12:35:15.204: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:17.515: INFO: Number of nodes with available pods: 0
Dec 22 12:35:17.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:18.737: INFO: Number of nodes with available pods: 0
Dec 22 12:35:18.737: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:19.227: INFO: Number of nodes with available pods: 0
Dec 22 12:35:19.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:20.207: INFO: Number of nodes with available pods: 0
Dec 22 12:35:20.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:21.207: INFO: Number of nodes with available pods: 1
Dec 22 12:35:21.207: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 22 12:35:21.302: INFO: Number of nodes with available pods: 0
Dec 22 12:35:21.302: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:22.331: INFO: Number of nodes with available pods: 0
Dec 22 12:35:22.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:23.352: INFO: Number of nodes with available pods: 0
Dec 22 12:35:23.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:24.544: INFO: Number of nodes with available pods: 0
Dec 22 12:35:24.544: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:25.321: INFO: Number of nodes with available pods: 0
Dec 22 12:35:25.321: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:26.550: INFO: Number of nodes with available pods: 0
Dec 22 12:35:26.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:27.326: INFO: Number of nodes with available pods: 0
Dec 22 12:35:27.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:28.325: INFO: Number of nodes with available pods: 0
Dec 22 12:35:28.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:29.406: INFO: Number of nodes with available pods: 0
Dec 22 12:35:29.406: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:30.686: INFO: Number of nodes with available pods: 0
Dec 22 12:35:30.686: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:31.325: INFO: Number of nodes with available pods: 0
Dec 22 12:35:31.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:32.319: INFO: Number of nodes with available pods: 0
Dec 22 12:35:32.319: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:33.334: INFO: Number of nodes with available pods: 0
Dec 22 12:35:33.334: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:34.936: INFO: Number of nodes with available pods: 0
Dec 22 12:35:34.936: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:35.677: INFO: Number of nodes with available pods: 0
Dec 22 12:35:35.677: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:36.615: INFO: Number of nodes with available pods: 0
Dec 22 12:35:36.616: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:37.331: INFO: Number of nodes with available pods: 0
Dec 22 12:35:37.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:38.326: INFO: Number of nodes with available pods: 0
Dec 22 12:35:38.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 12:35:39.331: INFO: Number of nodes with available pods: 1
Dec 22 12:35:39.331: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6lphm, will wait for the garbage collector to delete the pods
Dec 22 12:35:39.415: INFO: Deleting DaemonSet.extensions daemon-set took: 14.778992ms
Dec 22 12:35:39.616: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.959811ms
Dec 22 12:35:47.257: INFO: Number of nodes with available pods: 0
Dec 22 12:35:47.257: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 12:35:47.267: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6lphm/daemonsets","resourceVersion":"15679874"},"items":null}

Dec 22 12:35:47.272: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6lphm/pods","resourceVersion":"15679874"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:35:47.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6lphm" for this suite.
Dec 22 12:35:55.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:35:55.466: INFO: namespace: e2e-tests-daemonsets-6lphm, resource: bindings, ignored listing per whitelist
Dec 22 12:35:55.490: INFO: namespace e2e-tests-daemonsets-6lphm deletion completed in 8.198193627s

• [SLOW TEST:45.750 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
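The DaemonSet test above creates a simple DaemonSet, waits for one pod per node (this is a single-node cluster, hence the poll toward "Number of running nodes: 1, number of available pods: 1"), then deletes the pod and waits for the controller to revive it. A sketch of such a "simple DaemonSet", with an assumed image and selector labels (the log confirms only the name `daemon-set`):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set     # illustrative label
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image
```

Because a DaemonSet's desired state is "one pod on every eligible node", deleting a daemon pod immediately drives the controller to schedule a replacement, which is exactly the second poll loop in the log.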
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:35:55.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-q55l5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-q55l5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q55l5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 12.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.12_udp@PTR;check="$$(dig +tcp +noall +answer +search 12.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.12_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-q55l5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-q55l5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-q55l5.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-q55l5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-q55l5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 12.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.12_udp@PTR;check="$$(dig +tcp +noall +answer +search 12.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.12_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 22 12:36:11.984: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:11.988: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:11.994: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-q55l5 from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.001: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q55l5 from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.008: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.016: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.023: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.030: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.036: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.041: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.046: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.050: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.059: INFO: Unable to read 10.106.85.12_udp@PTR from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.063: INFO: Unable to read 10.106.85.12_tcp@PTR from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.067: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.071: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.075: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q55l5 from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.079: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q55l5 from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.081: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.084: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.091: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.094: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.097: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.100: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.104: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.107: INFO: Unable to read 10.106.85.12_udp@PTR from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.114: INFO: Unable to read 10.106.85.12_tcp@PTR from pod e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005: the server could not find the requested resource (get pods dns-test-995b9c9c-24b7-11ea-b023-0242ac110005)
Dec 22 12:36:12.114: INFO: Lookups using e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-q55l5 wheezy_tcp@dns-test-service.e2e-tests-dns-q55l5 wheezy_udp@dns-test-service.e2e-tests-dns-q55l5.svc wheezy_tcp@dns-test-service.e2e-tests-dns-q55l5.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.85.12_udp@PTR 10.106.85.12_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-q55l5 jessie_tcp@dns-test-service.e2e-tests-dns-q55l5 jessie_udp@dns-test-service.e2e-tests-dns-q55l5.svc jessie_tcp@dns-test-service.e2e-tests-dns-q55l5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-q55l5.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-q55l5.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.85.12_udp@PTR 10.106.85.12_tcp@PTR]

Dec 22 12:36:17.294: INFO: DNS probes using e2e-tests-dns-q55l5/dns-test-995b9c9c-24b7-11ea-b023-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:36:17.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-q55l5" for this suite.
Dec 22 12:36:25.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:36:25.942: INFO: namespace: e2e-tests-dns-q55l5, resource: bindings, ignored listing per whitelist
Dec 22 12:36:25.976: INFO: namespace e2e-tests-dns-q55l5 deletion completed in 8.24173763s

• [SLOW TEST:30.486 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:36:25.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 22 12:36:26.255: INFO: Waiting up to 5m0s for pod "pod-ab69339f-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-fqhp5" to be "success or failure"
Dec 22 12:36:26.267: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.682686ms
Dec 22 12:36:28.283: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027897918s
Dec 22 12:36:30.304: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049002322s
Dec 22 12:36:34.460: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204784232s
Dec 22 12:36:36.522: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267437274s
Dec 22 12:36:38.546: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.291318748s
Dec 22 12:36:40.586: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.331016269s
STEP: Saw pod success
Dec 22 12:36:40.586: INFO: Pod "pod-ab69339f-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:36:40.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ab69339f-24b7-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:36:40.739: INFO: Waiting for pod pod-ab69339f-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:36:40.833: INFO: Pod pod-ab69339f-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:36:40.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fqhp5" for this suite.
Dec 22 12:36:46.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:36:46.969: INFO: namespace: e2e-tests-emptydir-fqhp5, resource: bindings, ignored listing per whitelist
Dec 22 12:36:47.006: INFO: namespace e2e-tests-emptydir-fqhp5 deletion completed in 6.161223822s

• [SLOW TEST:21.029 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:36:47.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:36:47.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-6dhbb" to be "success or failure"
Dec 22 12:36:47.259: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.61301ms
Dec 22 12:36:49.425: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23652748s
Dec 22 12:36:51.439: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250756108s
Dec 22 12:36:53.584: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39511233s
Dec 22 12:36:55.596: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.407463507s
Dec 22 12:36:57.617: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.428258313s
Dec 22 12:36:59.637: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.44839456s
STEP: Saw pod success
Dec 22 12:36:59.637: INFO: Pod "downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:36:59.648: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:36:59.723: INFO: Waiting for pod downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005 to disappear
Dec 22 12:36:59.861: INFO: Pod downwardapi-volume-b7ef477d-24b7-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:36:59.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6dhbb" for this suite.
Dec 22 12:37:05.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:37:06.135: INFO: namespace: e2e-tests-downward-api-6dhbb, resource: bindings, ignored listing per whitelist
Dec 22 12:37:06.229: INFO: namespace e2e-tests-downward-api-6dhbb deletion completed in 6.325044352s

• [SLOW TEST:19.223 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:37:06.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:37:06.681: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.564431ms)
Dec 22 12:37:06.687: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.580588ms)
Dec 22 12:37:06.692: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.208142ms)
Dec 22 12:37:06.698: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.871167ms)
Dec 22 12:37:06.704: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.37196ms)
Dec 22 12:37:06.711: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.395466ms)
Dec 22 12:37:06.877: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 166.183982ms)
Dec 22 12:37:06.892: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.309034ms)
Dec 22 12:37:06.908: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.55546ms)
Dec 22 12:37:06.918: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.668351ms)
Dec 22 12:37:06.929: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.471483ms)
Dec 22 12:37:06.939: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.834151ms)
Dec 22 12:37:06.947: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.732001ms)
Dec 22 12:37:06.956: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.719464ms)
Dec 22 12:37:06.963: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.106153ms)
Dec 22 12:37:06.970: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.48167ms)
Dec 22 12:37:06.977: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.416387ms)
Dec 22 12:37:06.983: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.153432ms)
Dec 22 12:37:06.988: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.875985ms)
Dec 22 12:37:06.996: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.212483ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:37:06.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-b7jg2" for this suite.
Dec 22 12:37:15.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:37:15.219: INFO: namespace: e2e-tests-proxy-b7jg2, resource: bindings, ignored listing per whitelist
Dec 22 12:37:15.242: INFO: namespace e2e-tests-proxy-b7jg2 deletion completed in 8.238385725s

• [SLOW TEST:9.013 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:37:15.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-m2psh
I1222 12:37:15.450028       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-m2psh, replica count: 1
I1222 12:37:16.501566       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:17.502410       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:18.503293       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:19.504337       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:20.505617       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:21.506154       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:22.506871       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:23.507549       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:24.508436       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 12:37:25.508931       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 22 12:37:25.704: INFO: Created: latency-svc-8r585
Dec 22 12:37:25.905: INFO: Got endpoints: latency-svc-8r585 [295.914621ms]
Dec 22 12:37:26.165: INFO: Created: latency-svc-nj8w4
Dec 22 12:37:26.477: INFO: Got endpoints: latency-svc-nj8w4 [570.028188ms]
Dec 22 12:37:26.480: INFO: Created: latency-svc-brwmr
Dec 22 12:37:26.551: INFO: Got endpoints: latency-svc-brwmr [644.598317ms]
Dec 22 12:37:26.852: INFO: Created: latency-svc-xhmdn
Dec 22 12:37:26.853: INFO: Got endpoints: latency-svc-xhmdn [945.866034ms]
Dec 22 12:37:27.014: INFO: Created: latency-svc-ktrtc
Dec 22 12:37:27.028: INFO: Got endpoints: latency-svc-ktrtc [1.122048282s]
Dec 22 12:37:27.103: INFO: Created: latency-svc-hzjj6
Dec 22 12:37:27.276: INFO: Got endpoints: latency-svc-hzjj6 [1.368900597s]
Dec 22 12:37:27.307: INFO: Created: latency-svc-ltwps
Dec 22 12:37:27.339: INFO: Got endpoints: latency-svc-ltwps [1.432642623s]
Dec 22 12:37:27.471: INFO: Created: latency-svc-js5wp
Dec 22 12:37:27.495: INFO: Got endpoints: latency-svc-js5wp [1.588816902s]
Dec 22 12:37:27.560: INFO: Created: latency-svc-hk5fd
Dec 22 12:37:27.671: INFO: Got endpoints: latency-svc-hk5fd [1.764076352s]
Dec 22 12:37:27.716: INFO: Created: latency-svc-bhfm6
Dec 22 12:37:27.742: INFO: Got endpoints: latency-svc-bhfm6 [1.835920737s]
Dec 22 12:37:27.979: INFO: Created: latency-svc-t9kwf
Dec 22 12:37:28.002: INFO: Got endpoints: latency-svc-t9kwf [2.0955136s]
Dec 22 12:37:28.194: INFO: Created: latency-svc-xr262
Dec 22 12:37:28.203: INFO: Got endpoints: latency-svc-xr262 [2.296341129s]
Dec 22 12:37:28.403: INFO: Created: latency-svc-chdrn
Dec 22 12:37:28.464: INFO: Got endpoints: latency-svc-chdrn [2.55632525s]
Dec 22 12:37:28.722: INFO: Created: latency-svc-n5dtg
Dec 22 12:37:28.911: INFO: Got endpoints: latency-svc-n5dtg [3.003656295s]
Dec 22 12:37:28.978: INFO: Created: latency-svc-44vlq
Dec 22 12:37:29.149: INFO: Got endpoints: latency-svc-44vlq [3.2429462s]
Dec 22 12:37:29.195: INFO: Created: latency-svc-qtn79
Dec 22 12:37:29.201: INFO: Got endpoints: latency-svc-qtn79 [3.295111965s]
Dec 22 12:37:29.246: INFO: Created: latency-svc-6sshl
Dec 22 12:37:29.406: INFO: Got endpoints: latency-svc-6sshl [2.928186425s]
Dec 22 12:37:29.471: INFO: Created: latency-svc-75qsv
Dec 22 12:37:29.619: INFO: Got endpoints: latency-svc-75qsv [3.066629708s]
Dec 22 12:37:29.660: INFO: Created: latency-svc-sgdc5
Dec 22 12:37:29.665: INFO: Got endpoints: latency-svc-sgdc5 [2.811950258s]
Dec 22 12:37:29.734: INFO: Created: latency-svc-p652l
Dec 22 12:37:29.886: INFO: Got endpoints: latency-svc-p652l [2.857624947s]
Dec 22 12:37:29.920: INFO: Created: latency-svc-w6n2l
Dec 22 12:37:29.959: INFO: Got endpoints: latency-svc-w6n2l [2.683102409s]
Dec 22 12:37:30.137: INFO: Created: latency-svc-4vxng
Dec 22 12:37:30.161: INFO: Got endpoints: latency-svc-4vxng [2.822507346s]
Dec 22 12:37:30.355: INFO: Created: latency-svc-kp9z5
Dec 22 12:37:30.407: INFO: Got endpoints: latency-svc-kp9z5 [2.912158702s]
Dec 22 12:37:30.596: INFO: Created: latency-svc-j88c5
Dec 22 12:37:30.781: INFO: Got endpoints: latency-svc-j88c5 [3.109426196s]
Dec 22 12:37:30.820: INFO: Created: latency-svc-xpmk8
Dec 22 12:37:30.842: INFO: Got endpoints: latency-svc-xpmk8 [3.099426627s]
Dec 22 12:37:31.143: INFO: Created: latency-svc-5l7r4
Dec 22 12:37:31.153: INFO: Got endpoints: latency-svc-5l7r4 [371.983475ms]
Dec 22 12:37:31.307: INFO: Created: latency-svc-xggnw
Dec 22 12:37:31.377: INFO: Got endpoints: latency-svc-xggnw [3.374953502s]
Dec 22 12:37:31.383: INFO: Created: latency-svc-75fcn
Dec 22 12:37:31.483: INFO: Got endpoints: latency-svc-75fcn [3.280383487s]
Dec 22 12:37:31.514: INFO: Created: latency-svc-9xsdr
Dec 22 12:37:31.531: INFO: Got endpoints: latency-svc-9xsdr [3.067029559s]
Dec 22 12:37:31.732: INFO: Created: latency-svc-rzss8
Dec 22 12:37:31.754: INFO: Got endpoints: latency-svc-rzss8 [2.842345639s]
Dec 22 12:37:31.957: INFO: Created: latency-svc-bzcpb
Dec 22 12:37:31.962: INFO: Got endpoints: latency-svc-bzcpb [2.812601784s]
Dec 22 12:37:32.038: INFO: Created: latency-svc-vh8f4
Dec 22 12:37:32.111: INFO: Got endpoints: latency-svc-vh8f4 [2.910586231s]
Dec 22 12:37:32.141: INFO: Created: latency-svc-klb9h
Dec 22 12:37:32.193: INFO: Got endpoints: latency-svc-klb9h [2.786474228s]
Dec 22 12:37:32.320: INFO: Created: latency-svc-929x7
Dec 22 12:37:32.337: INFO: Got endpoints: latency-svc-929x7 [2.718115576s]
Dec 22 12:37:32.513: INFO: Created: latency-svc-sqwxx
Dec 22 12:37:32.812: INFO: Got endpoints: latency-svc-sqwxx [3.146739374s]
Dec 22 12:37:32.915: INFO: Created: latency-svc-tb7pw
Dec 22 12:37:33.137: INFO: Got endpoints: latency-svc-tb7pw [3.25077063s]
Dec 22 12:37:33.195: INFO: Created: latency-svc-wtrb6
Dec 22 12:37:33.201: INFO: Got endpoints: latency-svc-wtrb6 [3.241254545s]
Dec 22 12:37:33.333: INFO: Created: latency-svc-td2x8
Dec 22 12:37:33.362: INFO: Got endpoints: latency-svc-td2x8 [3.200718018s]
Dec 22 12:37:33.545: INFO: Created: latency-svc-rcfbk
Dec 22 12:37:33.545: INFO: Got endpoints: latency-svc-rcfbk [3.137585061s]
Dec 22 12:37:33.706: INFO: Created: latency-svc-6dn5c
Dec 22 12:37:33.713: INFO: Got endpoints: latency-svc-6dn5c [2.87043342s]
Dec 22 12:37:33.777: INFO: Created: latency-svc-zh574
Dec 22 12:37:33.923: INFO: Got endpoints: latency-svc-zh574 [2.769749028s]
Dec 22 12:37:33.972: INFO: Created: latency-svc-msjcj
Dec 22 12:37:34.002: INFO: Got endpoints: latency-svc-msjcj [2.624372582s]
Dec 22 12:37:34.445: INFO: Created: latency-svc-tfgdz
Dec 22 12:37:34.557: INFO: Got endpoints: latency-svc-tfgdz [3.073273506s]
Dec 22 12:37:34.614: INFO: Created: latency-svc-m4rqv
Dec 22 12:37:34.652: INFO: Got endpoints: latency-svc-m4rqv [3.121054837s]
Dec 22 12:37:35.112: INFO: Created: latency-svc-wpq2d
Dec 22 12:37:35.129: INFO: Got endpoints: latency-svc-wpq2d [3.375284046s]
Dec 22 12:37:35.328: INFO: Created: latency-svc-gm7pj
Dec 22 12:37:35.334: INFO: Got endpoints: latency-svc-gm7pj [3.37163958s]
Dec 22 12:37:35.618: INFO: Created: latency-svc-hljkz
Dec 22 12:37:35.634: INFO: Got endpoints: latency-svc-hljkz [3.52231771s]
Dec 22 12:37:35.832: INFO: Created: latency-svc-7dkgj
Dec 22 12:37:35.864: INFO: Got endpoints: latency-svc-7dkgj [3.670625912s]
Dec 22 12:37:36.165: INFO: Created: latency-svc-5fvzc
Dec 22 12:37:36.177: INFO: Got endpoints: latency-svc-5fvzc [3.839527078s]
Dec 22 12:37:36.334: INFO: Created: latency-svc-v8q9s
Dec 22 12:37:36.377: INFO: Got endpoints: latency-svc-v8q9s [3.564408118s]
Dec 22 12:37:36.609: INFO: Created: latency-svc-lcqzc
Dec 22 12:37:36.610: INFO: Got endpoints: latency-svc-lcqzc [3.472589591s]
Dec 22 12:37:36.748: INFO: Created: latency-svc-t64nk
Dec 22 12:37:36.783: INFO: Got endpoints: latency-svc-t64nk [3.581476152s]
Dec 22 12:37:36.929: INFO: Created: latency-svc-tk6dc
Dec 22 12:37:36.953: INFO: Got endpoints: latency-svc-tk6dc [3.590699605s]
Dec 22 12:37:37.142: INFO: Created: latency-svc-x7qdj
Dec 22 12:37:37.174: INFO: Got endpoints: latency-svc-x7qdj [3.628547506s]
Dec 22 12:37:37.381: INFO: Created: latency-svc-w955k
Dec 22 12:37:37.422: INFO: Got endpoints: latency-svc-w955k [3.709082797s]
Dec 22 12:37:37.616: INFO: Created: latency-svc-5qb4p
Dec 22 12:37:37.641: INFO: Got endpoints: latency-svc-5qb4p [3.717653487s]
Dec 22 12:37:37.912: INFO: Created: latency-svc-9r8f9
Dec 22 12:37:38.066: INFO: Got endpoints: latency-svc-9r8f9 [4.063812085s]
Dec 22 12:37:38.088: INFO: Created: latency-svc-vnrqs
Dec 22 12:37:38.142: INFO: Got endpoints: latency-svc-vnrqs [3.584806455s]
Dec 22 12:37:38.270: INFO: Created: latency-svc-v46zs
Dec 22 12:37:38.706: INFO: Got endpoints: latency-svc-v46zs [4.053382656s]
Dec 22 12:37:39.231: INFO: Created: latency-svc-2sfwx
Dec 22 12:37:39.378: INFO: Got endpoints: latency-svc-2sfwx [4.248351629s]
Dec 22 12:37:39.623: INFO: Created: latency-svc-pnscq
Dec 22 12:37:39.643: INFO: Got endpoints: latency-svc-pnscq [4.30931732s]
Dec 22 12:37:39.759: INFO: Created: latency-svc-cbjf5
Dec 22 12:37:39.773: INFO: Got endpoints: latency-svc-cbjf5 [4.139333896s]
Dec 22 12:37:39.851: INFO: Created: latency-svc-rkgtf
Dec 22 12:37:39.851: INFO: Got endpoints: latency-svc-rkgtf [3.987159788s]
Dec 22 12:37:39.935: INFO: Created: latency-svc-z2twl
Dec 22 12:37:39.946: INFO: Got endpoints: latency-svc-z2twl [3.769391509s]
Dec 22 12:37:39.977: INFO: Created: latency-svc-rc7xr
Dec 22 12:37:40.003: INFO: Got endpoints: latency-svc-rc7xr [3.625663885s]
Dec 22 12:37:40.083: INFO: Created: latency-svc-dxchq
Dec 22 12:37:40.104: INFO: Got endpoints: latency-svc-dxchq [3.494378906s]
Dec 22 12:37:40.179: INFO: Created: latency-svc-cx689
Dec 22 12:37:40.248: INFO: Got endpoints: latency-svc-cx689 [3.465431997s]
Dec 22 12:37:40.312: INFO: Created: latency-svc-47gwc
Dec 22 12:37:40.318: INFO: Got endpoints: latency-svc-47gwc [3.364239325s]
Dec 22 12:37:40.441: INFO: Created: latency-svc-gnp24
Dec 22 12:37:40.449: INFO: Got endpoints: latency-svc-gnp24 [3.274266829s]
Dec 22 12:37:40.677: INFO: Created: latency-svc-rwgn5
Dec 22 12:37:40.686: INFO: Got endpoints: latency-svc-rwgn5 [3.263607997s]
Dec 22 12:37:40.771: INFO: Created: latency-svc-hgsb9
Dec 22 12:37:40.931: INFO: Got endpoints: latency-svc-hgsb9 [3.289729011s]
Dec 22 12:37:40.933: INFO: Created: latency-svc-644kl
Dec 22 12:37:40.970: INFO: Got endpoints: latency-svc-644kl [2.904029602s]
Dec 22 12:37:41.007: INFO: Created: latency-svc-7pz2z
Dec 22 12:37:41.149: INFO: Got endpoints: latency-svc-7pz2z [3.006934758s]
Dec 22 12:37:41.215: INFO: Created: latency-svc-lg6nr
Dec 22 12:37:41.396: INFO: Got endpoints: latency-svc-lg6nr [2.689772407s]
Dec 22 12:37:41.442: INFO: Created: latency-svc-xqp9g
Dec 22 12:37:41.449: INFO: Got endpoints: latency-svc-xqp9g [2.071589669s]
Dec 22 12:37:41.642: INFO: Created: latency-svc-dkb7c
Dec 22 12:37:41.658: INFO: Got endpoints: latency-svc-dkb7c [2.015110201s]
Dec 22 12:37:41.756: INFO: Created: latency-svc-4tgnb
Dec 22 12:37:41.839: INFO: Got endpoints: latency-svc-4tgnb [2.065408133s]
Dec 22 12:37:41.846: INFO: Created: latency-svc-9f9hg
Dec 22 12:37:42.020: INFO: Got endpoints: latency-svc-9f9hg [2.168185552s]
Dec 22 12:37:42.028: INFO: Created: latency-svc-cnndt
Dec 22 12:37:42.053: INFO: Got endpoints: latency-svc-cnndt [2.106902847s]
Dec 22 12:37:42.230: INFO: Created: latency-svc-fznjw
Dec 22 12:37:42.235: INFO: Got endpoints: latency-svc-fznjw [2.231747788s]
Dec 22 12:37:42.304: INFO: Created: latency-svc-89kfx
Dec 22 12:37:42.423: INFO: Got endpoints: latency-svc-89kfx [2.319002913s]
Dec 22 12:37:42.460: INFO: Created: latency-svc-pmpf9
Dec 22 12:37:42.598: INFO: Got endpoints: latency-svc-pmpf9 [2.348912001s]
Dec 22 12:37:42.665: INFO: Created: latency-svc-tcrp4
Dec 22 12:37:42.811: INFO: Got endpoints: latency-svc-tcrp4 [2.493255294s]
Dec 22 12:37:42.908: INFO: Created: latency-svc-pr6z6
Dec 22 12:37:42.976: INFO: Got endpoints: latency-svc-pr6z6 [2.527065864s]
Dec 22 12:37:43.017: INFO: Created: latency-svc-c924m
Dec 22 12:37:43.179: INFO: Got endpoints: latency-svc-c924m [2.493522837s]
Dec 22 12:37:43.285: INFO: Created: latency-svc-vmc5b
Dec 22 12:37:43.448: INFO: Got endpoints: latency-svc-vmc5b [2.516749795s]
Dec 22 12:37:43.483: INFO: Created: latency-svc-sncbb
Dec 22 12:37:43.520: INFO: Created: latency-svc-wv58s
Dec 22 12:37:43.521: INFO: Got endpoints: latency-svc-sncbb [2.550296004s]
Dec 22 12:37:43.643: INFO: Got endpoints: latency-svc-wv58s [2.493614209s]
Dec 22 12:37:43.687: INFO: Created: latency-svc-c49l4
Dec 22 12:37:43.724: INFO: Got endpoints: latency-svc-c49l4 [2.327798124s]
Dec 22 12:37:43.902: INFO: Created: latency-svc-vck8w
Dec 22 12:37:43.902: INFO: Got endpoints: latency-svc-vck8w [2.452685893s]
Dec 22 12:37:44.104: INFO: Created: latency-svc-d2js8
Dec 22 12:37:44.145: INFO: Got endpoints: latency-svc-d2js8 [2.486218396s]
Dec 22 12:37:44.281: INFO: Created: latency-svc-6qwlf
Dec 22 12:37:44.323: INFO: Got endpoints: latency-svc-6qwlf [2.484147093s]
Dec 22 12:37:44.517: INFO: Created: latency-svc-6x5lf
Dec 22 12:37:44.556: INFO: Got endpoints: latency-svc-6x5lf [2.535901977s]
Dec 22 12:37:44.795: INFO: Created: latency-svc-vxlxx
Dec 22 12:37:44.938: INFO: Got endpoints: latency-svc-vxlxx [2.885107976s]
Dec 22 12:37:44.971: INFO: Created: latency-svc-gclpn
Dec 22 12:37:44.985: INFO: Got endpoints: latency-svc-gclpn [2.74996035s]
Dec 22 12:37:45.201: INFO: Created: latency-svc-zs495
Dec 22 12:37:45.238: INFO: Got endpoints: latency-svc-zs495 [2.814103288s]
Dec 22 12:37:45.390: INFO: Created: latency-svc-95zzf
Dec 22 12:37:45.576: INFO: Got endpoints: latency-svc-95zzf [2.978003715s]
Dec 22 12:37:45.579: INFO: Created: latency-svc-5xzh4
Dec 22 12:37:45.597: INFO: Got endpoints: latency-svc-5xzh4 [2.785019132s]
Dec 22 12:37:45.674: INFO: Created: latency-svc-mttr8
Dec 22 12:37:45.776: INFO: Got endpoints: latency-svc-mttr8 [2.800269625s]
Dec 22 12:37:45.843: INFO: Created: latency-svc-h57st
Dec 22 12:37:45.858: INFO: Got endpoints: latency-svc-h57st [2.678766439s]
Dec 22 12:37:45.966: INFO: Created: latency-svc-26sv7
Dec 22 12:37:45.999: INFO: Got endpoints: latency-svc-26sv7 [2.550848971s]
Dec 22 12:37:46.141: INFO: Created: latency-svc-j2ltz
Dec 22 12:37:46.151: INFO: Got endpoints: latency-svc-j2ltz [2.630496993s]
Dec 22 12:37:46.210: INFO: Created: latency-svc-bzxdq
Dec 22 12:37:46.339: INFO: Got endpoints: latency-svc-bzxdq [2.695526402s]
Dec 22 12:37:46.351: INFO: Created: latency-svc-45xzz
Dec 22 12:37:46.373: INFO: Got endpoints: latency-svc-45xzz [2.648247507s]
Dec 22 12:37:46.526: INFO: Created: latency-svc-9xnzq
Dec 22 12:37:46.539: INFO: Got endpoints: latency-svc-9xnzq [2.637089354s]
Dec 22 12:37:46.684: INFO: Created: latency-svc-w78q6
Dec 22 12:37:46.762: INFO: Got endpoints: latency-svc-w78q6 [2.616517723s]
Dec 22 12:37:46.789: INFO: Created: latency-svc-pl78h
Dec 22 12:37:46.908: INFO: Got endpoints: latency-svc-pl78h [2.584682573s]
Dec 22 12:37:46.981: INFO: Created: latency-svc-nkkvz
Dec 22 12:37:47.083: INFO: Got endpoints: latency-svc-nkkvz [2.527150933s]
Dec 22 12:37:47.099: INFO: Created: latency-svc-5fd8f
Dec 22 12:37:47.272: INFO: Got endpoints: latency-svc-5fd8f [2.332620822s]
Dec 22 12:37:47.280: INFO: Created: latency-svc-lb8b9
Dec 22 12:37:47.433: INFO: Got endpoints: latency-svc-lb8b9 [2.448075125s]
Dec 22 12:37:47.454: INFO: Created: latency-svc-86bdx
Dec 22 12:37:47.468: INFO: Got endpoints: latency-svc-86bdx [2.230506047s]
Dec 22 12:37:47.750: INFO: Created: latency-svc-5v92s
Dec 22 12:37:47.770: INFO: Got endpoints: latency-svc-5v92s [2.193948037s]
Dec 22 12:37:47.941: INFO: Created: latency-svc-j6f95
Dec 22 12:37:47.973: INFO: Got endpoints: latency-svc-j6f95 [2.376086337s]
Dec 22 12:37:48.095: INFO: Created: latency-svc-r7kqz
Dec 22 12:37:48.117: INFO: Got endpoints: latency-svc-r7kqz [2.339897164s]
Dec 22 12:37:48.250: INFO: Created: latency-svc-q8p8w
Dec 22 12:37:48.273: INFO: Got endpoints: latency-svc-q8p8w [2.413905871s]
Dec 22 12:37:48.353: INFO: Created: latency-svc-pgln5
Dec 22 12:37:48.513: INFO: Got endpoints: latency-svc-pgln5 [2.513258948s]
Dec 22 12:37:48.524: INFO: Created: latency-svc-zd6l4
Dec 22 12:37:48.561: INFO: Got endpoints: latency-svc-zd6l4 [2.409115382s]
Dec 22 12:37:48.780: INFO: Created: latency-svc-2np8g
Dec 22 12:37:48.971: INFO: Got endpoints: latency-svc-2np8g [2.63137819s]
Dec 22 12:37:48.975: INFO: Created: latency-svc-6mnqf
Dec 22 12:37:49.002: INFO: Got endpoints: latency-svc-6mnqf [2.629389398s]
Dec 22 12:37:49.242: INFO: Created: latency-svc-8rbjd
Dec 22 12:37:49.265: INFO: Got endpoints: latency-svc-8rbjd [2.725656711s]
Dec 22 12:37:49.419: INFO: Created: latency-svc-fl8lr
Dec 22 12:37:49.463: INFO: Got endpoints: latency-svc-fl8lr [2.700858922s]
Dec 22 12:37:49.578: INFO: Created: latency-svc-bhxgp
Dec 22 12:37:49.605: INFO: Got endpoints: latency-svc-bhxgp [2.696418967s]
Dec 22 12:37:49.886: INFO: Created: latency-svc-jwmwk
Dec 22 12:37:49.886: INFO: Got endpoints: latency-svc-jwmwk [2.802308441s]
Dec 22 12:37:49.996: INFO: Created: latency-svc-n6269
Dec 22 12:37:50.005: INFO: Got endpoints: latency-svc-n6269 [2.732813108s]
Dec 22 12:37:50.057: INFO: Created: latency-svc-bd948
Dec 22 12:37:50.159: INFO: Got endpoints: latency-svc-bd948 [2.725944612s]
Dec 22 12:37:50.206: INFO: Created: latency-svc-kgtsf
Dec 22 12:37:50.222: INFO: Got endpoints: latency-svc-kgtsf [2.753168279s]
Dec 22 12:37:50.339: INFO: Created: latency-svc-hgqws
Dec 22 12:37:50.359: INFO: Got endpoints: latency-svc-hgqws [2.587977169s]
Dec 22 12:37:50.602: INFO: Created: latency-svc-zs9gb
Dec 22 12:37:50.633: INFO: Created: latency-svc-6642j
Dec 22 12:37:50.642: INFO: Got endpoints: latency-svc-zs9gb [2.668182949s]
Dec 22 12:37:50.667: INFO: Got endpoints: latency-svc-6642j [2.549553072s]
Dec 22 12:37:50.910: INFO: Created: latency-svc-8hsn5
Dec 22 12:37:51.038: INFO: Got endpoints: latency-svc-8hsn5 [2.76518927s]
Dec 22 12:37:51.061: INFO: Created: latency-svc-w5bqs
Dec 22 12:37:51.097: INFO: Got endpoints: latency-svc-w5bqs [2.583049508s]
Dec 22 12:37:51.226: INFO: Created: latency-svc-vn64z
Dec 22 12:37:51.226: INFO: Got endpoints: latency-svc-vn64z [2.664793543s]
Dec 22 12:37:51.327: INFO: Created: latency-svc-b4vls
Dec 22 12:37:51.392: INFO: Got endpoints: latency-svc-b4vls [2.420698621s]
Dec 22 12:37:51.453: INFO: Created: latency-svc-pt7jx
Dec 22 12:37:51.484: INFO: Got endpoints: latency-svc-pt7jx [2.480912281s]
Dec 22 12:37:51.648: INFO: Created: latency-svc-9m6kq
Dec 22 12:37:51.716: INFO: Got endpoints: latency-svc-9m6kq [2.45030884s]
Dec 22 12:37:51.876: INFO: Created: latency-svc-9fr8t
Dec 22 12:37:51.890: INFO: Got endpoints: latency-svc-9fr8t [2.426691364s]
Dec 22 12:37:52.041: INFO: Created: latency-svc-gtwkl
Dec 22 12:37:52.065: INFO: Got endpoints: latency-svc-gtwkl [2.459160025s]
Dec 22 12:37:52.106: INFO: Created: latency-svc-hmcpg
Dec 22 12:37:52.254: INFO: Got endpoints: latency-svc-hmcpg [2.367769902s]
Dec 22 12:37:52.274: INFO: Created: latency-svc-nnmmb
Dec 22 12:37:52.312: INFO: Got endpoints: latency-svc-nnmmb [2.306806308s]
Dec 22 12:37:52.328: INFO: Created: latency-svc-r25gd
Dec 22 12:37:52.343: INFO: Got endpoints: latency-svc-r25gd [2.183210005s]
Dec 22 12:37:52.451: INFO: Created: latency-svc-wdl2f
Dec 22 12:37:52.490: INFO: Got endpoints: latency-svc-wdl2f [2.268697368s]
Dec 22 12:37:52.697: INFO: Created: latency-svc-hhnfl
Dec 22 12:37:52.737: INFO: Got endpoints: latency-svc-hhnfl [2.377830749s]
Dec 22 12:37:52.996: INFO: Created: latency-svc-5gthq
Dec 22 12:37:53.022: INFO: Got endpoints: latency-svc-5gthq [2.380316556s]
Dec 22 12:37:54.269: INFO: Created: latency-svc-4q22r
Dec 22 12:37:54.732: INFO: Got endpoints: latency-svc-4q22r [4.065505538s]
Dec 22 12:37:54.811: INFO: Created: latency-svc-jf2hz
Dec 22 12:37:54.957: INFO: Got endpoints: latency-svc-jf2hz [3.918259349s]
Dec 22 12:37:55.000: INFO: Created: latency-svc-zcmk8
Dec 22 12:37:55.014: INFO: Got endpoints: latency-svc-zcmk8 [3.917186284s]
Dec 22 12:37:55.163: INFO: Created: latency-svc-jpwtv
Dec 22 12:37:55.216: INFO: Got endpoints: latency-svc-jpwtv [3.989125718s]
Dec 22 12:37:55.242: INFO: Created: latency-svc-t4c49
Dec 22 12:37:55.303: INFO: Got endpoints: latency-svc-t4c49 [3.910690911s]
Dec 22 12:37:55.480: INFO: Created: latency-svc-ldhkx
Dec 22 12:37:55.493: INFO: Got endpoints: latency-svc-ldhkx [4.009033935s]
Dec 22 12:37:55.685: INFO: Created: latency-svc-zr7fc
Dec 22 12:37:55.702: INFO: Got endpoints: latency-svc-zr7fc [3.985017752s]
Dec 22 12:37:55.753: INFO: Created: latency-svc-m5jjk
Dec 22 12:37:55.891: INFO: Got endpoints: latency-svc-m5jjk [4.00073815s]
Dec 22 12:37:55.914: INFO: Created: latency-svc-6cfhg
Dec 22 12:37:55.974: INFO: Created: latency-svc-z8hz4
Dec 22 12:37:55.975: INFO: Got endpoints: latency-svc-6cfhg [3.909420887s]
Dec 22 12:37:56.078: INFO: Got endpoints: latency-svc-z8hz4 [3.82348827s]
Dec 22 12:37:56.135: INFO: Created: latency-svc-hgbpr
Dec 22 12:37:56.279: INFO: Got endpoints: latency-svc-hgbpr [3.966594393s]
Dec 22 12:37:56.298: INFO: Created: latency-svc-md87c
Dec 22 12:37:56.318: INFO: Got endpoints: latency-svc-md87c [3.975198191s]
Dec 22 12:37:56.371: INFO: Created: latency-svc-qg4jp
Dec 22 12:37:56.495: INFO: Created: latency-svc-9kkmq
Dec 22 12:37:56.550: INFO: Got endpoints: latency-svc-qg4jp [4.058685985s]
Dec 22 12:37:56.569: INFO: Got endpoints: latency-svc-9kkmq [3.832359549s]
Dec 22 12:37:56.783: INFO: Created: latency-svc-j4wmg
Dec 22 12:37:57.043: INFO: Got endpoints: latency-svc-j4wmg [4.020482737s]
Dec 22 12:37:57.103: INFO: Created: latency-svc-hwrjt
Dec 22 12:37:57.104: INFO: Got endpoints: latency-svc-hwrjt [2.37077508s]
Dec 22 12:37:57.274: INFO: Created: latency-svc-nlvnp
Dec 22 12:37:57.315: INFO: Got endpoints: latency-svc-nlvnp [2.357702459s]
Dec 22 12:37:57.577: INFO: Created: latency-svc-lrw92
Dec 22 12:37:57.647: INFO: Got endpoints: latency-svc-lrw92 [2.632366546s]
Dec 22 12:37:58.113: INFO: Created: latency-svc-gdfdw
Dec 22 12:37:58.203: INFO: Got endpoints: latency-svc-gdfdw [2.987229356s]
Dec 22 12:37:58.498: INFO: Created: latency-svc-x6w5n
Dec 22 12:37:58.534: INFO: Got endpoints: latency-svc-x6w5n [3.230620845s]
Dec 22 12:37:58.707: INFO: Created: latency-svc-prptb
Dec 22 12:37:58.718: INFO: Got endpoints: latency-svc-prptb [3.224849372s]
Dec 22 12:37:58.882: INFO: Created: latency-svc-k65ll
Dec 22 12:37:58.888: INFO: Got endpoints: latency-svc-k65ll [3.185178157s]
Dec 22 12:37:58.936: INFO: Created: latency-svc-vb68r
Dec 22 12:37:59.083: INFO: Got endpoints: latency-svc-vb68r [3.191623022s]
Dec 22 12:37:59.116: INFO: Created: latency-svc-2m84r
Dec 22 12:37:59.143: INFO: Got endpoints: latency-svc-2m84r [3.167473991s]
Dec 22 12:37:59.391: INFO: Created: latency-svc-qr47c
Dec 22 12:37:59.409: INFO: Got endpoints: latency-svc-qr47c [3.330776205s]
Dec 22 12:37:59.577: INFO: Created: latency-svc-b6tx5
Dec 22 12:37:59.661: INFO: Got endpoints: latency-svc-b6tx5 [3.382687553s]
Dec 22 12:37:59.837: INFO: Created: latency-svc-4xdhl
Dec 22 12:37:59.894: INFO: Created: latency-svc-kp7qp
Dec 22 12:37:59.896: INFO: Got endpoints: latency-svc-4xdhl [3.577503551s]
Dec 22 12:37:59.926: INFO: Got endpoints: latency-svc-kp7qp [3.37541534s]
Dec 22 12:38:00.023: INFO: Created: latency-svc-cgvhj
Dec 22 12:38:00.028: INFO: Got endpoints: latency-svc-cgvhj [3.457823179s]
Dec 22 12:38:00.087: INFO: Created: latency-svc-6ntz5
Dec 22 12:38:00.101: INFO: Got endpoints: latency-svc-6ntz5 [3.058256092s]
Dec 22 12:38:00.233: INFO: Created: latency-svc-scfxx
Dec 22 12:38:00.249: INFO: Got endpoints: latency-svc-scfxx [3.145320757s]
Dec 22 12:38:00.323: INFO: Created: latency-svc-7bt2n
Dec 22 12:38:00.402: INFO: Got endpoints: latency-svc-7bt2n [3.087376795s]
Dec 22 12:38:00.435: INFO: Created: latency-svc-vnntd
Dec 22 12:38:00.453: INFO: Got endpoints: latency-svc-vnntd [2.805679597s]
Dec 22 12:38:00.587: INFO: Created: latency-svc-rkdrs
Dec 22 12:38:00.602: INFO: Got endpoints: latency-svc-rkdrs [2.398523498s]
Dec 22 12:38:00.778: INFO: Created: latency-svc-4ctch
Dec 22 12:38:00.806: INFO: Got endpoints: latency-svc-4ctch [2.266793147s]
Dec 22 12:38:01.013: INFO: Created: latency-svc-pzbl2
Dec 22 12:38:01.023: INFO: Got endpoints: latency-svc-pzbl2 [2.30472349s]
Dec 22 12:38:01.229: INFO: Created: latency-svc-htkfr
Dec 22 12:38:01.249: INFO: Got endpoints: latency-svc-htkfr [2.360491134s]
Dec 22 12:38:01.431: INFO: Created: latency-svc-8dfmc
Dec 22 12:38:01.512: INFO: Got endpoints: latency-svc-8dfmc [2.428286891s]
Dec 22 12:38:01.749: INFO: Created: latency-svc-cpm9l
Dec 22 12:38:01.883: INFO: Got endpoints: latency-svc-cpm9l [2.739974191s]
Dec 22 12:38:01.898: INFO: Created: latency-svc-rlgdn
Dec 22 12:38:01.909: INFO: Got endpoints: latency-svc-rlgdn [2.500397426s]
Dec 22 12:38:02.055: INFO: Created: latency-svc-wn2qn
Dec 22 12:38:02.056: INFO: Got endpoints: latency-svc-wn2qn [2.394021621s]
Dec 22 12:38:02.107: INFO: Created: latency-svc-h8f8z
Dec 22 12:38:02.232: INFO: Got endpoints: latency-svc-h8f8z [2.335558328s]
Dec 22 12:38:02.261: INFO: Created: latency-svc-88lwb
Dec 22 12:38:02.299: INFO: Got endpoints: latency-svc-88lwb [2.372523411s]
Dec 22 12:38:02.395: INFO: Created: latency-svc-wn8st
Dec 22 12:38:02.430: INFO: Got endpoints: latency-svc-wn8st [2.401527752s]
Dec 22 12:38:02.585: INFO: Created: latency-svc-lj92h
Dec 22 12:38:02.630: INFO: Got endpoints: latency-svc-lj92h [2.528804809s]
Dec 22 12:38:02.696: INFO: Created: latency-svc-p4qw7
Dec 22 12:38:02.788: INFO: Got endpoints: latency-svc-p4qw7 [2.538425812s]
Dec 22 12:38:03.004: INFO: Created: latency-svc-xw5tw
Dec 22 12:38:03.031: INFO: Got endpoints: latency-svc-xw5tw [2.62864359s]
Dec 22 12:38:03.080: INFO: Created: latency-svc-nmg4b
Dec 22 12:38:03.208: INFO: Got endpoints: latency-svc-nmg4b [2.755432886s]
Dec 22 12:38:03.286: INFO: Created: latency-svc-f65sx
Dec 22 12:38:03.403: INFO: Got endpoints: latency-svc-f65sx [2.800755462s]
Dec 22 12:38:03.451: INFO: Created: latency-svc-48hxs
Dec 22 12:38:03.620: INFO: Got endpoints: latency-svc-48hxs [2.813488504s]
Dec 22 12:38:03.658: INFO: Created: latency-svc-s6bxf
Dec 22 12:38:03.827: INFO: Got endpoints: latency-svc-s6bxf [2.803664114s]
Dec 22 12:38:03.836: INFO: Created: latency-svc-v9t55
Dec 22 12:38:03.858: INFO: Got endpoints: latency-svc-v9t55 [2.608941111s]
Dec 22 12:38:03.988: INFO: Created: latency-svc-f8sbc
Dec 22 12:38:04.011: INFO: Got endpoints: latency-svc-f8sbc [2.499260548s]
Dec 22 12:38:04.162: INFO: Created: latency-svc-d49bq
Dec 22 12:38:04.183: INFO: Got endpoints: latency-svc-d49bq [2.299615043s]
Dec 22 12:38:04.394: INFO: Created: latency-svc-v4s62
Dec 22 12:38:04.412: INFO: Got endpoints: latency-svc-v4s62 [2.502597446s]
Dec 22 12:38:04.548: INFO: Created: latency-svc-gh6wp
Dec 22 12:38:04.594: INFO: Got endpoints: latency-svc-gh6wp [2.537792107s]
Dec 22 12:38:04.708: INFO: Created: latency-svc-prwxv
Dec 22 12:38:04.745: INFO: Got endpoints: latency-svc-prwxv [2.513162838s]
Dec 22 12:38:04.755: INFO: Created: latency-svc-2kzfp
Dec 22 12:38:04.764: INFO: Got endpoints: latency-svc-2kzfp [2.465333627s]
Dec 22 12:38:04.765: INFO: Latencies: [371.983475ms 570.028188ms 644.598317ms 945.866034ms 1.122048282s 1.368900597s 1.432642623s 1.588816902s 1.764076352s 1.835920737s 2.015110201s 2.065408133s 2.071589669s 2.0955136s 2.106902847s 2.168185552s 2.183210005s 2.193948037s 2.230506047s 2.231747788s 2.266793147s 2.268697368s 2.296341129s 2.299615043s 2.30472349s 2.306806308s 2.319002913s 2.327798124s 2.332620822s 2.335558328s 2.339897164s 2.348912001s 2.357702459s 2.360491134s 2.367769902s 2.37077508s 2.372523411s 2.376086337s 2.377830749s 2.380316556s 2.394021621s 2.398523498s 2.401527752s 2.409115382s 2.413905871s 2.420698621s 2.426691364s 2.428286891s 2.448075125s 2.45030884s 2.452685893s 2.459160025s 2.465333627s 2.480912281s 2.484147093s 2.486218396s 2.493255294s 2.493522837s 2.493614209s 2.499260548s 2.500397426s 2.502597446s 2.513162838s 2.513258948s 2.516749795s 2.527065864s 2.527150933s 2.528804809s 2.535901977s 2.537792107s 2.538425812s 2.549553072s 2.550296004s 2.550848971s 2.55632525s 2.583049508s 2.584682573s 2.587977169s 2.608941111s 2.616517723s 2.624372582s 2.62864359s 2.629389398s 2.630496993s 2.63137819s 2.632366546s 2.637089354s 2.648247507s 2.664793543s 2.668182949s 2.678766439s 2.683102409s 2.689772407s 2.695526402s 2.696418967s 2.700858922s 2.718115576s 2.725656711s 2.725944612s 2.732813108s 2.739974191s 2.74996035s 2.753168279s 2.755432886s 2.76518927s 2.769749028s 2.785019132s 2.786474228s 2.800269625s 2.800755462s 2.802308441s 2.803664114s 2.805679597s 2.811950258s 2.812601784s 2.813488504s 2.814103288s 2.822507346s 2.842345639s 2.857624947s 2.87043342s 2.885107976s 2.904029602s 2.910586231s 2.912158702s 2.928186425s 2.978003715s 2.987229356s 3.003656295s 3.006934758s 3.058256092s 3.066629708s 3.067029559s 3.073273506s 3.087376795s 3.099426627s 3.109426196s 3.121054837s 3.137585061s 3.145320757s 3.146739374s 3.167473991s 3.185178157s 3.191623022s 3.200718018s 3.224849372s 3.230620845s 3.241254545s 3.2429462s 3.25077063s 3.263607997s 3.274266829s 3.280383487s 3.289729011s 3.295111965s 3.330776205s 3.364239325s 3.37163958s 3.374953502s 3.375284046s 3.37541534s 3.382687553s 3.457823179s 3.465431997s 3.472589591s 3.494378906s 3.52231771s 3.564408118s 3.577503551s 3.581476152s 3.584806455s 3.590699605s 3.625663885s 3.628547506s 3.670625912s 3.709082797s 3.717653487s 3.769391509s 3.82348827s 3.832359549s 3.839527078s 3.909420887s 3.910690911s 3.917186284s 3.918259349s 3.966594393s 3.975198191s 3.985017752s 3.987159788s 3.989125718s 4.00073815s 4.009033935s 4.020482737s 4.053382656s 4.058685985s 4.063812085s 4.065505538s 4.139333896s 4.248351629s 4.30931732s]
Dec 22 12:38:04.765: INFO: 50 %ile: 2.739974191s
Dec 22 12:38:04.765: INFO: 90 %ile: 3.839527078s
Dec 22 12:38:04.765: INFO: 99 %ile: 4.248351629s
Dec 22 12:38:04.765: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:38:04.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-m2psh" for this suite.
Dec 22 12:39:00.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:39:00.995: INFO: namespace: e2e-tests-svc-latency-m2psh, resource: bindings, ignored listing per whitelist
Dec 22 12:39:01.236: INFO: namespace e2e-tests-svc-latency-m2psh deletion completed in 56.453803916s

• [SLOW TEST:105.994 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:39:01.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0802e32d-24b8-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 12:39:01.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-jm4nt" to be "success or failure"
Dec 22 12:39:01.991: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.390482ms
Dec 22 12:39:04.474: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544527437s
Dec 22 12:39:06.493: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.563560526s
Dec 22 12:39:08.860: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.930618582s
Dec 22 12:39:10.879: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.94919375s
Dec 22 12:39:12.893: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.964036088s
Dec 22 12:39:14.907: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.977453848s
STEP: Saw pod success
Dec 22 12:39:14.907: INFO: Pod "pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:39:14.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 12:39:15.754: INFO: Waiting for pod pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005 to disappear
Dec 22 12:39:15.868: INFO: Pod pod-projected-configmaps-081863bd-24b8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:39:15.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jm4nt" for this suite.
Dec 22 12:39:21.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:39:22.067: INFO: namespace: e2e-tests-projected-jm4nt, resource: bindings, ignored listing per whitelist
Dec 22 12:39:22.133: INFO: namespace e2e-tests-projected-jm4nt deletion completed in 6.186635695s

• [SLOW TEST:20.895 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:39:22.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 22 12:39:22.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b4q49'
Dec 22 12:39:24.681: INFO: stderr: ""
Dec 22 12:39:24.681: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 22 12:39:26.092: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:26.092: INFO: Found 0 / 1
Dec 22 12:39:26.703: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:26.703: INFO: Found 0 / 1
Dec 22 12:39:27.912: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:27.913: INFO: Found 0 / 1
Dec 22 12:39:28.747: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:28.747: INFO: Found 0 / 1
Dec 22 12:39:29.699: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:29.699: INFO: Found 0 / 1
Dec 22 12:39:31.332: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:31.333: INFO: Found 0 / 1
Dec 22 12:39:31.700: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:31.700: INFO: Found 0 / 1
Dec 22 12:39:32.703: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:32.703: INFO: Found 0 / 1
Dec 22 12:39:33.702: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:33.702: INFO: Found 0 / 1
Dec 22 12:39:34.700: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:34.700: INFO: Found 0 / 1
Dec 22 12:39:35.701: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:35.701: INFO: Found 1 / 1
Dec 22 12:39:35.701: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 22 12:39:35.715: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:35.716: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 22 12:39:35.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nm5kd --namespace=e2e-tests-kubectl-b4q49 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 22 12:39:35.923: INFO: stderr: ""
Dec 22 12:39:35.923: INFO: stdout: "pod/redis-master-nm5kd patched\n"
STEP: checking annotations
Dec 22 12:39:35.944: INFO: Selector matched 1 pods for map[app:redis]
Dec 22 12:39:35.945: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:39:35.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b4q49" for this suite.
Dec 22 12:39:59.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:40:00.240: INFO: namespace: e2e-tests-kubectl-b4q49, resource: bindings, ignored listing per whitelist
Dec 22 12:40:00.256: INFO: namespace e2e-tests-kubectl-b4q49 deletion completed in 24.304733282s

• [SLOW TEST:38.123 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:40:00.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 22 12:40:11.134: INFO: Successfully updated pod "annotationupdate2b1715bf-24b8-11ea-b023-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:40:13.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-knbxl" for this suite.
Dec 22 12:40:35.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:40:35.408: INFO: namespace: e2e-tests-projected-knbxl, resource: bindings, ignored listing per whitelist
Dec 22 12:40:35.465: INFO: namespace e2e-tests-projected-knbxl deletion completed in 22.183341099s

• [SLOW TEST:35.208 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:40:35.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 22 12:40:35.775: INFO: Waiting up to 5m0s for pod "pod-402a41d5-24b8-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-cdncb" to be "success or failure"
Dec 22 12:40:35.799: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.160362ms
Dec 22 12:40:37.861: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086070815s
Dec 22 12:40:39.888: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112748198s
Dec 22 12:40:41.944: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169500265s
Dec 22 12:40:43.963: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187729173s
Dec 22 12:40:45.981: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.206169566s
STEP: Saw pod success
Dec 22 12:40:45.981: INFO: Pod "pod-402a41d5-24b8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:40:45.988: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-402a41d5-24b8-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 12:40:46.087: INFO: Waiting for pod pod-402a41d5-24b8-11ea-b023-0242ac110005 to disappear
Dec 22 12:40:46.110: INFO: Pod pod-402a41d5-24b8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:40:46.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cdncb" for this suite.
Dec 22 12:40:52.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:40:52.262: INFO: namespace: e2e-tests-emptydir-cdncb, resource: bindings, ignored listing per whitelist
Dec 22 12:40:52.286: INFO: namespace e2e-tests-emptydir-cdncb deletion completed in 6.161230521s

• [SLOW TEST:16.821 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:40:52.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 12:40:52.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-9xrrv" to be "success or failure"
Dec 22 12:40:52.565: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 103.681398ms
Dec 22 12:40:54.593: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131809411s
Dec 22 12:40:56.640: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178668555s
Dec 22 12:40:58.952: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490476523s
Dec 22 12:41:00.988: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52652885s
Dec 22 12:41:03.018: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556970047s
STEP: Saw pod success
Dec 22 12:41:03.018: INFO: Pod "downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 12:41:03.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 12:41:03.480: INFO: Waiting for pod downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005 to disappear
Dec 22 12:41:03.500: INFO: Pod downwardapi-volume-4a1f947f-24b8-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:41:03.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9xrrv" for this suite.
Dec 22 12:41:09.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:41:09.835: INFO: namespace: e2e-tests-downward-api-9xrrv, resource: bindings, ignored listing per whitelist
Dec 22 12:41:09.869: INFO: namespace e2e-tests-downward-api-9xrrv deletion completed in 6.355068088s

• [SLOW TEST:17.583 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
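The memory-limit test above exercises the downward API volume plugin: the pod projects its own container's `limits.memory` into a file, and the container reads it back. A minimal pod spec of that shape — a sketch only; the name, image, and mount path are illustrative, the real spec lives in `downwardapi_volume.go`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:            # projects the container's own limit
          containerName: client-container
          resource: limits.memory
```

The test then waits for the pod to reach "success or failure" and checks the container log, which is the pattern visible in the polling lines above.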
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:41:09.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lbm9x
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-lbm9x
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-lbm9x
Dec 22 12:41:10.193: INFO: Found 0 stateful pods, waiting for 1
Dec 22 12:41:20.260: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Dec 22 12:41:30.205: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
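Burst scaling, as exercised below, corresponds to a StatefulSet with `podManagementPolicy: Parallel`, which lets the controller create and delete pods in parallel rather than waiting for each ordinal's predecessor to become Ready. A minimal sketch — only the names `ss` and service `test` come from the log; the image and the rest of the layout are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # headless service created earlier in the namespace
  podManagementPolicy: Parallel  # burst create/delete; no ordered Ready waiting
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```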
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 22 12:41:30.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 12:41:30.961: INFO: stderr: ""
Dec 22 12:41:30.961: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 12:41:30.961: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
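The `mv` works because the test's nginx container serves its readiness probe from `index.html`: moving the file away makes the probe fail, so the pod drops to Ready=false without restarting, which is exactly the transition the next two log lines wait for. A probe of roughly that shape — an assumption about the test image, not shown in the log:

```yaml
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  periodSeconds: 1
  failureThreshold: 1
```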

Dec 22 12:41:30.978: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 22 12:41:41.007: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 12:41:41.007: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 12:41:41.108: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 22 12:41:41.109: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:10 +0000 UTC  }]
Dec 22 12:41:41.109: INFO: 
Dec 22 12:41:41.109: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 22 12:41:42.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.94626044s
Dec 22 12:41:43.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.443376592s
Dec 22 12:41:45.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.389206361s
Dec 22 12:41:46.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935130334s
Dec 22 12:41:47.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.904446513s
Dec 22 12:41:48.191: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.880294049s
Dec 22 12:41:50.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.863948832s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-lbm9x
Dec 22 12:41:51.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:41:53.318: INFO: stderr: ""
Dec 22 12:41:53.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 12:41:53.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 12:41:53.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:41:53.664: INFO: rc: 1
Dec 22 12:41:53.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000de61b0 exit status 1   true [0xc000970000 0xc000970018 0xc000970030] [0xc000970000 0xc000970018 0xc000970030] [0xc000970010 0xc000970028] [0x935700 0x935700] 0xc0027e0240 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 22 12:42:03.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:42:04.684: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 22 12:42:04.684: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 12:42:04.684: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 12:42:04.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:42:05.168: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 22 12:42:05.169: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 22 12:42:05.169: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 22 12:42:05.189: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 12:42:05.189: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 12:42:05.189: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 22 12:42:05.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 12:42:05.668: INFO: stderr: ""
Dec 22 12:42:05.668: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 12:42:05.668: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 12:42:05.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 12:42:06.199: INFO: stderr: ""
Dec 22 12:42:06.200: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 12:42:06.200: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 12:42:06.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 22 12:42:06.631: INFO: stderr: ""
Dec 22 12:42:06.631: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 22 12:42:06.631: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 22 12:42:06.631: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 12:42:06.641: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 22 12:42:16.693: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 12:42:16.693: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 12:42:16.693: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 22 12:42:16.725: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 22 12:42:16.725: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:10 +0000 UTC  }]
Dec 22 12:42:16.726: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  }]
Dec 22 12:42:16.726: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  }]
Dec 22 12:42:16.726: INFO: 
Dec 22 12:42:16.726: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 12:42:19.240: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 22 12:42:19.241: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:10 +0000 UTC  }]
Dec 22 12:42:19.241: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  }]
Dec 22 12:42:19.241: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:42:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:41:41 +0000 UTC  }]
Dec 22 12:42:19.241: INFO: 
Dec 22 12:42:19.241: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 22 12:42:20.397 - 12:42:25.797: INFO: StatefulSet ss has not reached scale 0, at 3 (five more polls at 12:42:20.397, 12:42:21.462, 12:42:23.267, 12:42:24.767 and 12:42:25.797; pod-condition dumps identical to the one above, elided)
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-lbm9x
Dec 22 12:42:26.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:42:27.192: INFO: rc: 1
Dec 22 12:42:27.193: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000f661b0 exit status 1   true [0xc00045ae98 0xc00045aed8 0xc00045af38] [0xc00045ae98 0xc00045aed8 0xc00045af38] [0xc00045aec8 0xc00045af18] [0x935700 0x935700] 0xc0027dd740 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 22 12:42:37.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:42:37.395: INFO: rc: 1
Dec 22 12:42:37.396: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000de7140 exit status 1   true [0xc0009700e8 0xc000970100 0xc000970118] [0xc0009700e8 0xc000970100 0xc000970118] [0xc0009700f8 0xc000970110] [0x935700 0x935700] 0xc0024523c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:42:47.396 - 12:44:39.769: INFO: Waiting 10s to retry failed RunHostCmd: Error from server (NotFound): pods "ss-0" not found (the same kubectl exec against ss-0 is retried every 10s and fails twelve more times; identical error blocks elided)
Dec 22 12:44:49.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:44:49.937: INFO: rc: 1
Dec 22 12:44:49.937: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002af2c0 exit status 1   true [0xc000970118 0xc000970130 0xc000970148] [0xc000970118 0xc000970130 0xc000970148] [0xc000970128 0xc000970140] [0x935700 0x935700] 0xc00218e4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:44:59.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:45:00.072: INFO: rc: 1
Dec 22 12:45:00.073: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fb85d0 exit status 1   true [0xc001dda1f0 0xc001dda238 0xc001dda280] [0xc001dda1f0 0xc001dda238 0xc001dda280] [0xc001dda218 0xc001dda270] [0x935700 0x935700] 0xc0027db380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:45:10.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:45:10.202: INFO: rc: 1
Dec 22 12:45:10.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002af3e0 exit status 1   true [0xc000970150 0xc000970168 0xc000970180] [0xc000970150 0xc000970168 0xc000970180] [0xc000970160 0xc000970178] [0x935700 0x935700] 0xc00218e7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:45:20.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:45:20.370: INFO: rc: 1
Dec 22 12:45:20.370: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00026fe00 exit status 1   true [0xc001acc000 0xc001acc0d0 0xc001acc168] [0xc001acc000 0xc001acc0d0 0xc001acc168] [0xc001acc070 0xc001acc150] [0x935700 0x935700] 0xc0017da2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:45:30.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:45:30.515: INFO: rc: 1
Dec 22 12:45:30.515: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00026fda0 exit status 1   true [0xc0000e8210 0xc00000e010 0xc001dda048] [0xc0000e8210 0xc00000e010 0xc001dda048] [0xc0000e82c0 0xc001dda038] [0x935700 0x935700] 0xc0027eea20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:45:40.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:45:40.668: INFO: rc: 1
Dec 22 12:45:40.668: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fb8180 exit status 1   true [0xc000970000 0xc000970018 0xc000970030] [0xc000970000 0xc000970018 0xc000970030] [0xc000970010 0xc000970028] [0x935700 0x935700] 0xc0027e0240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:45:50.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:45:50.780: INFO: rc: 1
Dec 22 12:45:50.780: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fb82d0 exit status 1   true [0xc000970038 0xc000970050 0xc000970068] [0xc000970038 0xc000970050 0xc000970068] [0xc000970048 0xc000970060] [0x935700 0x935700] 0xc0027e0540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:46:00.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:46:01.004: INFO: rc: 1
Dec 22 12:46:01.005: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fb8450 exit status 1   true [0xc000970070 0xc000970088 0xc0009700a0] [0xc000970070 0xc000970088 0xc0009700a0] [0xc000970080 0xc000970098] [0x935700 0x935700] 0xc0027e0840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:46:11.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:46:11.157: INFO: rc: 1
Dec 22 12:46:11.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000109b00 exit status 1   true [0xc001acc000 0xc001acc0d0 0xc001acc168] [0xc001acc000 0xc001acc0d0 0xc001acc168] [0xc001acc070 0xc001acc150] [0x935700 0x935700] 0xc0027db320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:46:21.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:46:21.326: INFO: rc: 1
Dec 22 12:46:21.327: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000109c20 exit status 1   true [0xc001acc180 0xc001acc1f8 0xc001acc278] [0xc001acc180 0xc001acc1f8 0xc001acc278] [0xc001acc1c8 0xc001acc258] [0x935700 0x935700] 0xc0027db620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:46:31.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:46:31.491: INFO: rc: 1
Dec 22 12:46:31.491: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000109d40 exit status 1   true [0xc001acc288 0xc001acc318 0xc001acc3d0] [0xc001acc288 0xc001acc318 0xc001acc3d0] [0xc001acc2f0 0xc001acc3b8] [0x935700 0x935700] 0xc0027db920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:46:41.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:46:41.673: INFO: rc: 1
Dec 22 12:46:41.673: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002aed20 exit status 1   true [0xc00045a028 0xc00045a0a8 0xc00045a118] [0xc00045a028 0xc00045a0a8 0xc00045a118] [0xc00045a0a0 0xc00045a0e0] [0x935700 0x935700] 0xc00218e240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:46:51.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:46:52.508: INFO: rc: 1
Dec 22 12:46:52.509: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002aeed0 exit status 1   true [0xc00045a138 0xc00045a160 0xc00045a1c0] [0xc00045a138 0xc00045a160 0xc00045a1c0] [0xc00045a158 0xc00045a180] [0x935700 0x935700] 0xc00218e5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:47:02.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:47:02.624: INFO: rc: 1
Dec 22 12:47:02.625: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002af020 exit status 1   true [0xc00045a1d0 0xc00045a200 0xc00045a250] [0xc00045a1d0 0xc00045a200 0xc00045a250] [0xc00045a1f0 0xc00045a240] [0x935700 0x935700] 0xc00218f9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:47:12.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:47:12.764: INFO: rc: 1
Dec 22 12:47:12.764: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002af1d0 exit status 1   true [0xc00045a288 0xc00045a310 0xc00045a3b0] [0xc00045a288 0xc00045a310 0xc00045a3b0] [0xc00045a2f0 0xc00045a378] [0x935700 0x935700] 0xc00218fce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:47:22.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:47:22.962: INFO: rc: 1
Dec 22 12:47:22.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000109ec0 exit status 1   true [0xc001acc410 0xc001acc460 0xc001acc538] [0xc001acc410 0xc001acc460 0xc001acc538] [0xc001acc440 0xc001acc4e0] [0x935700 0x935700] 0xc0027dbc20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 22 12:47:32.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lbm9x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 22 12:47:33.090: INFO: rc: 1
Dec 22 12:47:33.090: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 22 12:47:33.090: INFO: Scaling statefulset ss to 0
Dec 22 12:47:33.115: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 22 12:47:33.118: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lbm9x
Dec 22 12:47:33.122: INFO: Scaling statefulset ss to 0
Dec 22 12:47:33.136: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 12:47:33.140: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:47:33.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lbm9x" for this suite.
Dec 22 12:47:39.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:47:39.406: INFO: namespace: e2e-tests-statefulset-lbm9x, resource: bindings, ignored listing per whitelist
Dec 22 12:47:39.446: INFO: namespace e2e-tests-statefulset-lbm9x deletion completed in 6.263492632s

• [SLOW TEST:389.576 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:47:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:47:39.785: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 22 12:47:44.799: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 22 12:47:50.817: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 22 12:47:52.837: INFO: Creating deployment "test-rollover-deployment"
Dec 22 12:47:52.895: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 22 12:47:54.956: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 22 12:47:55.028: INFO: Ensure that both replica sets have 1 created replica
Dec 22 12:47:55.124: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 22 12:47:55.188: INFO: Updating deployment test-rollover-deployment
Dec 22 12:47:55.188: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 22 12:47:57.534: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 22 12:47:57.542: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 22 12:47:57.550: INFO: all replica sets need to contain the pod-template-hash label
Dec 22 12:47:57.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615673, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615673, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615676, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615673, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... 10 similar polls elided: the message 'all replica sets need to contain the pod-template-hash label' and an identical DeploymentStatus dump repeated roughly every 2s from 12:47:59 through 12:48:17; ReadyReplicas rose from 1 to 2 in the 12:48:09 poll, with all other fields unchanged ...]
Dec 22 12:48:20.294: INFO: 
Dec 22 12:48:20.294: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 22 12:48:20.323: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-bb4sl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bb4sl/deployments/test-rollover-deployment,UID:44b40a7a-24b9-11ea-a994-fa163e34d433,ResourceVersion:15682683,Generation:2,CreationTimestamp:2019-12-22 12:47:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-22 12:47:53 +0000 UTC 2019-12-22 12:47:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-22 12:48:18 +0000 UTC 2019-12-22 12:47:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 22 12:48:20.334: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-bb4sl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bb4sl/replicasets/test-rollover-deployment-5b8479fdb6,UID:461b9b9a-24b9-11ea-a994-fa163e34d433,ResourceVersion:15682673,Generation:2,CreationTimestamp:2019-12-22 12:47:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 44b40a7a-24b9-11ea-a994-fa163e34d433 0xc001599d67 0xc001599d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 22 12:48:20.334: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 22 12:48:20.335: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-bb4sl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bb4sl/replicasets/test-rollover-controller,UID:3ce8757d-24b9-11ea-a994-fa163e34d433,ResourceVersion:15682682,Generation:2,CreationTimestamp:2019-12-22 12:47:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 44b40a7a-24b9-11ea-a994-fa163e34d433 0xc001599bd7 0xc001599bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 12:48:20.335: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-bb4sl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bb4sl/replicasets/test-rollover-deployment-58494b7559,UID:44c5e165-24b9-11ea-a994-fa163e34d433,ResourceVersion:15682638,Generation:2,CreationTimestamp:2019-12-22 12:47:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 44b40a7a-24b9-11ea-a994-fa163e34d433 0xc001599c97 0xc001599c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 12:48:20.355: INFO: Pod "test-rollover-deployment-5b8479fdb6-86v2v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-86v2v,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-bb4sl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bb4sl/pods/test-rollover-deployment-5b8479fdb6-86v2v,UID:46d14f36-24b9-11ea-a994-fa163e34d433,ResourceVersion:15682658,Generation:0,CreationTimestamp:2019-12-22 12:47:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 461b9b9a-24b9-11ea-a994-fa163e34d433 0xc0021b1367 0xc0021b1368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-46zrd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-46zrd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-46zrd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021b13e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021b1400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:47:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:48:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:48:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:47:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-22 12:47:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-22 12:48:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ffbe6130336c217f8b306d92e57b0d00084637c306166f9d10c33db630b44562}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:48:20.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bb4sl" for this suite.
Dec 22 12:48:30.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:48:30.764: INFO: namespace: e2e-tests-deployment-bb4sl, resource: bindings, ignored listing per whitelist
Dec 22 12:48:30.916: INFO: namespace e2e-tests-deployment-bb4sl deletion completed in 10.547900361s

• [SLOW TEST:51.470 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:48:30.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 12:48:31.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-md8v9'
Dec 22 12:48:31.435: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 12:48:31.435: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 22 12:48:31.652: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-xp7t5]
Dec 22 12:48:31.652: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-xp7t5" in namespace "e2e-tests-kubectl-md8v9" to be "running and ready"
Dec 22 12:48:31.661: INFO: Pod "e2e-test-nginx-rc-xp7t5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.010982ms
Dec 22 12:48:33.903: INFO: Pod "e2e-test-nginx-rc-xp7t5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250068394s
Dec 22 12:48:35.944: INFO: Pod "e2e-test-nginx-rc-xp7t5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29198312s
Dec 22 12:48:37.967: INFO: Pod "e2e-test-nginx-rc-xp7t5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314282856s
Dec 22 12:48:39.980: INFO: Pod "e2e-test-nginx-rc-xp7t5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327923208s
Dec 22 12:48:42.000: INFO: Pod "e2e-test-nginx-rc-xp7t5": Phase="Running", Reason="", readiness=true. Elapsed: 10.347075987s
Dec 22 12:48:42.000: INFO: Pod "e2e-test-nginx-rc-xp7t5" satisfied condition "running and ready"
Dec 22 12:48:42.000: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-xp7t5]
Dec 22 12:48:42.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-md8v9'
Dec 22 12:48:42.307: INFO: stderr: ""
Dec 22 12:48:42.307: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 22 12:48:42.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-md8v9'
Dec 22 12:48:42.574: INFO: stderr: ""
Dec 22 12:48:42.575: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:48:42.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-md8v9" for this suite.
Dec 22 12:49:06.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:49:06.718: INFO: namespace: e2e-tests-kubectl-md8v9, resource: bindings, ignored listing per whitelist
Dec 22 12:49:06.753: INFO: namespace e2e-tests-kubectl-md8v9 deletion completed in 24.169834912s

• [SLOW TEST:35.837 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:49:06.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 22 12:49:06.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 22 12:49:07.222: INFO: stderr: ""
Dec 22 12:49:07.223: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:49:07.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z5hmc" for this suite.
Dec 22 12:49:15.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:49:15.447: INFO: namespace: e2e-tests-kubectl-z5hmc, resource: bindings, ignored listing per whitelist
Dec 22 12:49:15.458: INFO: namespace e2e-tests-kubectl-z5hmc deletion completed in 8.219147377s

• [SLOW TEST:8.704 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:49:15.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 22 12:49:15.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 22 12:49:15.718: INFO: stderr: ""
Dec 22 12:49:15.719: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:49:15.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d9sjb" for this suite.
Dec 22 12:49:21.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:49:22.107: INFO: namespace: e2e-tests-kubectl-d9sjb, resource: bindings, ignored listing per whitelist
Dec 22 12:49:22.133: INFO: namespace e2e-tests-kubectl-d9sjb deletion completed in 6.4066543s

• [SLOW TEST:6.674 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:49:22.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 22 12:49:32.428: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-7a0356dc-24b9-11ea-b023-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-rg2bz", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rg2bz/pods/pod-submit-remove-7a0356dc-24b9-11ea-b023-0242ac110005", UID:"7a07fe07-24b9-11ea-a994-fa163e34d433", ResourceVersion:"15682862", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712615762, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"281056543"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wmhnn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0029638c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wmhnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029f4a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002a026c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029f4ad0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0029f4af0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029f4af8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029f4afc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615762, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615771, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615771, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712615762, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002a10680), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002a106a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://0f938b8efbd971d4cbdab7dce35fab95ccaf072c071140e54836e9ddeb3821e9"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:49:42.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rg2bz" for this suite.
Dec 22 12:49:48.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:49:48.794: INFO: namespace: e2e-tests-pods-rg2bz, resource: bindings, ignored listing per whitelist
Dec 22 12:49:48.920: INFO: namespace e2e-tests-pods-rg2bz deletion completed in 6.209915778s

• [SLOW TEST:26.787 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:49:48.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 22 12:50:01.799: INFO: Successfully updated pod "labelsupdate8a08bdc2-24b9-11ea-b023-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:50:03.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8zh8l" for this suite.
Dec 22 12:50:28.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:50:28.098: INFO: namespace: e2e-tests-downward-api-8zh8l, resource: bindings, ignored listing per whitelist
Dec 22 12:50:28.252: INFO: namespace e2e-tests-downward-api-8zh8l deletion completed in 24.270537013s

• [SLOW TEST:39.332 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:50:28.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:50:41.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5wr2v" for this suite.
Dec 22 12:50:47.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:50:48.008: INFO: namespace: e2e-tests-kubelet-test-5wr2v, resource: bindings, ignored listing per whitelist
Dec 22 12:50:48.027: INFO: namespace e2e-tests-kubelet-test-5wr2v deletion completed in 6.739939068s

• [SLOW TEST:19.774 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:50:48.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 22 12:50:48.283: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 22 12:50:48.293: INFO: Waiting for terminating namespaces to be deleted...
Dec 22 12:50:48.296: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 22 12:50:48.316: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 22 12:50:48.316: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 22 12:50:48.316: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:50:48.316: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 22 12:50:48.316: INFO: 	Container weave ready: true, restart count 0
Dec 22 12:50:48.316: INFO: 	Container weave-npc ready: true, restart count 0
Dec 22 12:50:48.316: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 22 12:50:48.316: INFO: 	Container coredns ready: true, restart count 0
Dec 22 12:50:48.316: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:50:48.316: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:50:48.316: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 22 12:50:48.316: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 22 12:50:48.316: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.425: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.425: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.426: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.426: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.426: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.426: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.426: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 22 12:50:48.426: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ad5c0833-24b9-11ea-b023-0242ac110005.15e2b2782567702e], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-t4xwr/filler-pod-ad5c0833-24b9-11ea-b023-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ad5c0833-24b9-11ea-b023-0242ac110005.15e2b2795486014a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ad5c0833-24b9-11ea-b023-0242ac110005.15e2b27a192cbaf5], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ad5c0833-24b9-11ea-b023-0242ac110005.15e2b27a5ba7ec26], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e2b27af2e87d29], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:51:01.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-t4xwr" for this suite.
Dec 22 12:51:09.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:51:10.042: INFO: namespace: e2e-tests-sched-pred-t4xwr, resource: bindings, ignored listing per whitelist
Dec 22 12:51:10.142: INFO: namespace e2e-tests-sched-pred-t4xwr deletion completed in 8.319977293s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.114 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:51:10.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-c8hzs
Dec 22 12:51:24.414: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-c8hzs
STEP: checking the pod's current state and verifying that restartCount is present
Dec 22 12:51:24.443: INFO: Initial restart count of pod liveness-http is 0
Dec 22 12:51:42.763: INFO: Restart count of pod e2e-tests-container-probe-c8hzs/liveness-http is now 1 (18.319683892s elapsed)
Dec 22 12:52:02.993: INFO: Restart count of pod e2e-tests-container-probe-c8hzs/liveness-http is now 2 (38.549527071s elapsed)
Dec 22 12:52:23.179: INFO: Restart count of pod e2e-tests-container-probe-c8hzs/liveness-http is now 3 (58.735656731s elapsed)
Dec 22 12:52:43.708: INFO: Restart count of pod e2e-tests-container-probe-c8hzs/liveness-http is now 4 (1m19.264794029s elapsed)
Dec 22 12:53:45.335: INFO: Restart count of pod e2e-tests-container-probe-c8hzs/liveness-http is now 5 (2m20.891487343s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:53:45.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c8hzs" for this suite.
Dec 22 12:53:51.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:53:51.654: INFO: namespace: e2e-tests-container-probe-c8hzs, resource: bindings, ignored listing per whitelist
Dec 22 12:53:51.757: INFO: namespace e2e-tests-container-probe-c8hzs deletion completed in 6.305119326s

• [SLOW TEST:161.615 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:53:51.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-bndr
STEP: Creating a pod to test atomic-volume-subpath
Dec 22 12:53:52.074: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bndr" in namespace "e2e-tests-subpath-tvg74" to be "success or failure"
Dec 22 12:53:52.081: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 7.445665ms
Dec 22 12:53:54.117: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043191694s
Dec 22 12:53:56.151: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077219263s
Dec 22 12:53:59.537: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 7.462742435s
Dec 22 12:54:01.558: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.484425136s
Dec 22 12:54:03.570: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.4965651s
Dec 22 12:54:05.606: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 13.532144081s
Dec 22 12:54:07.921: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 15.846721692s
Dec 22 12:54:09.972: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Pending", Reason="", readiness=false. Elapsed: 17.897665827s
Dec 22 12:54:12.044: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 19.969703331s
Dec 22 12:54:14.057: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 21.98305056s
Dec 22 12:54:16.087: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 24.013483556s
Dec 22 12:54:18.107: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 26.033335128s
Dec 22 12:54:20.123: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 28.048771213s
Dec 22 12:54:22.142: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 30.06807725s
Dec 22 12:54:24.153: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 32.07915833s
Dec 22 12:54:26.170: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 34.095742407s
Dec 22 12:54:28.193: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Running", Reason="", readiness=false. Elapsed: 36.118691876s
Dec 22 12:54:30.205: INFO: Pod "pod-subpath-test-downwardapi-bndr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.13094073s
STEP: Saw pod success
Dec 22 12:54:30.205: INFO: Pod "pod-subpath-test-downwardapi-bndr" satisfied condition "success or failure"
Dec 22 12:54:30.209: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-bndr container test-container-subpath-downwardapi-bndr: 
STEP: delete the pod
Dec 22 12:54:32.004: INFO: Waiting for pod pod-subpath-test-downwardapi-bndr to disappear
Dec 22 12:54:32.698: INFO: Pod pod-subpath-test-downwardapi-bndr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bndr
Dec 22 12:54:32.698: INFO: Deleting pod "pod-subpath-test-downwardapi-bndr" in namespace "e2e-tests-subpath-tvg74"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:54:32.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-tvg74" for this suite.
Dec 22 12:54:40.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:54:40.861: INFO: namespace: e2e-tests-subpath-tvg74, resource: bindings, ignored listing per whitelist
Dec 22 12:54:41.233: INFO: namespace e2e-tests-subpath-tvg74 deletion completed in 8.513360653s

• [SLOW TEST:49.475 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:54:41.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 12:54:41.499: INFO: Creating ReplicaSet my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005
Dec 22 12:54:41.526: INFO: Pod name my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005: Found 0 pods out of 1
Dec 22 12:54:48.382: INFO: Pod name my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005: Found 1 pods out of 1
Dec 22 12:54:48.382: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005" is running
Dec 22 12:55:06.729: INFO: Pod "my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005-6w9j5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 12:54:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 12:54:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 12:54:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-22 12:54:41 +0000 UTC Reason: Message:}])
Dec 22 12:55:06.730: INFO: Trying to dial the pod
Dec 22 12:55:11.797: INFO: Controller my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005: Got expected result from replica 1 [my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005-6w9j5]: "my-hostname-basic-38483a6e-24ba-11ea-b023-0242ac110005-6w9j5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:55:11.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9j2z7" for this suite.
Dec 22 12:55:22.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:55:23.250: INFO: namespace: e2e-tests-replicaset-9j2z7, resource: bindings, ignored listing per whitelist
Dec 22 12:55:23.314: INFO: namespace e2e-tests-replicaset-9j2z7 deletion completed in 11.505511753s

• [SLOW TEST:42.081 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:55:23.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 22 12:55:23.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:26.896: INFO: stderr: ""
Dec 22 12:55:26.896: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 12:55:26.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:27.165: INFO: stderr: ""
Dec 22 12:55:27.166: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Dec 22 12:55:32.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:32.528: INFO: stderr: ""
Dec 22 12:55:32.528: INFO: stdout: "update-demo-nautilus-ljkzd update-demo-nautilus-tst2j "
Dec 22 12:55:32.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljkzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:32.723: INFO: stderr: ""
Dec 22 12:55:32.723: INFO: stdout: ""
Dec 22 12:55:32.723: INFO: update-demo-nautilus-ljkzd is created but not running
Dec 22 12:55:37.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:37.991: INFO: stderr: ""
Dec 22 12:55:37.991: INFO: stdout: "update-demo-nautilus-ljkzd update-demo-nautilus-tst2j "
Dec 22 12:55:37.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljkzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:38.420: INFO: stderr: ""
Dec 22 12:55:38.420: INFO: stdout: ""
Dec 22 12:55:38.420: INFO: update-demo-nautilus-ljkzd is created but not running
Dec 22 12:55:43.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:43.676: INFO: stderr: ""
Dec 22 12:55:43.676: INFO: stdout: "update-demo-nautilus-ljkzd update-demo-nautilus-tst2j "
Dec 22 12:55:43.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljkzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:43.798: INFO: stderr: ""
Dec 22 12:55:43.799: INFO: stdout: "true"
Dec 22 12:55:43.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljkzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:44.110: INFO: stderr: ""
Dec 22 12:55:44.110: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 12:55:44.110: INFO: validating pod update-demo-nautilus-ljkzd
Dec 22 12:55:44.163: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 12:55:44.163: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 22 12:55:44.163: INFO: update-demo-nautilus-ljkzd is verified up and running
Dec 22 12:55:44.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tst2j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:44.297: INFO: stderr: ""
Dec 22 12:55:44.297: INFO: stdout: "true"
Dec 22 12:55:44.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tst2j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:55:44.386: INFO: stderr: ""
Dec 22 12:55:44.386: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 12:55:44.386: INFO: validating pod update-demo-nautilus-tst2j
Dec 22 12:55:44.398: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 12:55:44.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 22 12:55:44.398: INFO: update-demo-nautilus-tst2j is verified up and running
STEP: rolling-update to new replication controller
Dec 22 12:55:44.400: INFO: scanned /root for discovery docs: 
Dec 22 12:55:44.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:56:26.457: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 22 12:56:26.457: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 12:56:26.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:56:26.722: INFO: stderr: ""
Dec 22 12:56:26.722: INFO: stdout: "update-demo-kitten-bgzvc update-demo-kitten-nzg2d "
Dec 22 12:56:26.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bgzvc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:56:26.905: INFO: stderr: ""
Dec 22 12:56:26.905: INFO: stdout: "true"
Dec 22 12:56:26.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bgzvc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:56:27.044: INFO: stderr: ""
Dec 22 12:56:27.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 22 12:56:27.044: INFO: validating pod update-demo-kitten-bgzvc
Dec 22 12:56:27.071: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 22 12:56:27.071: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 22 12:56:27.071: INFO: update-demo-kitten-bgzvc is verified up and running
Dec 22 12:56:27.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nzg2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:56:27.205: INFO: stderr: ""
Dec 22 12:56:27.206: INFO: stdout: "true"
Dec 22 12:56:27.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nzg2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fcll'
Dec 22 12:56:27.374: INFO: stderr: ""
Dec 22 12:56:27.374: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 22 12:56:27.374: INFO: validating pod update-demo-kitten-nzg2d
Dec 22 12:56:27.384: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 22 12:56:27.384: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Dec 22 12:56:27.384: INFO: update-demo-kitten-nzg2d is verified up and running
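The per-pod validation above fetches the JSON served by each update-demo pod, unmarshals it, and compares the `image` field against the expected `kitten.jpg`. A minimal, cluster-free sketch of that check (the `validate_pod_data` helper name is hypothetical, not part of the e2e framework):

```python
import json

def validate_pod_data(raw: str, expected_image: str) -> bool:
    """Unmarshal the JSON body served by an update-demo pod and
    check its 'image' field against the expected value."""
    data = json.loads(raw)
    return data.get("image") == expected_image

# The log above shows the pod returning {"image": "kitten.jpg"}:
raw = '{"image": "kitten.jpg"}'
print(validate_pod_data(raw, "kitten.jpg"))  # True once the rolling update has completed
```

A pod still serving the pre-update payload (e.g. `nautilus.jpg`) would fail this check, which is why the test only reports "verified up and running" after the comparison passes.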
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:56:27.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8fcll" for this suite.
Dec 22 12:57:07.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:57:07.951: INFO: namespace: e2e-tests-kubectl-8fcll, resource: bindings, ignored listing per whitelist
Dec 22 12:57:08.025: INFO: namespace e2e-tests-kubectl-8fcll deletion completed in 40.635935711s

• [SLOW TEST:104.710 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:57:08.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 22 12:57:34.753: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:34.766: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:36.767: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:36.842: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:38.766: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:38.813: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:40.767: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:40.854: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:42.766: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:42.787: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:44.767: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:44.781: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:46.766: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:46.785: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:48.766: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:48.778: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:50.766: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:50.784: INFO: Pod pod-with-poststart-http-hook still exists
Dec 22 12:57:52.767: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 22 12:57:52.784: INFO: Pod pod-with-poststart-http-hook no longer exists
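The deletion check above is a plain poll-until-gone loop: look the pod up every two seconds and stop once the lookup reports it no longer exists. A minimal sketch under that assumption (the injected `pod_exists` callable stands in for the API-server lookup, which is an illustration only):

```python
import time

def wait_for_pod_to_disappear(pod_name, pod_exists, interval=2.0, timeout=60.0):
    """Poll every `interval` seconds until `pod_exists(pod_name)` is False
    or `timeout` elapses; mirrors the two-second cadence in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        print(f"Waiting for pod {pod_name} to disappear")
        if not pod_exists(pod_name):
            print(f"Pod {pod_name} no longer exists")
            return True
        print(f"Pod {pod_name} still exists")
        time.sleep(interval)
    return False

# Stand-in for the API server: the pod vanishes after a few checks.
remaining = {"count": 3}
def fake_pod_exists(name):
    remaining["count"] -= 1
    return remaining["count"] > 0

wait_for_pod_to_disappear("pod-with-poststart-http-hook", fake_pod_exists, interval=0.01)
```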
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:57:52.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ff58v" for this suite.
Dec 22 12:58:18.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:58:18.932: INFO: namespace: e2e-tests-container-lifecycle-hook-ff58v, resource: bindings, ignored listing per whitelist
Dec 22 12:58:19.125: INFO: namespace e2e-tests-container-lifecycle-hook-ff58v deletion completed in 26.334138218s

• [SLOW TEST:71.100 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:58:19.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 22 12:58:33.634: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ba428471-24ba-11ea-b023-0242ac110005,GenerateName:,Namespace:e2e-tests-events-4l4mq,SelfLink:/api/v1/namespaces/e2e-tests-events-4l4mq/pods/send-events-ba428471-24ba-11ea-b023-0242ac110005,UID:ba4530cd-24ba-11ea-a994-fa163e34d433,ResourceVersion:15683874,Generation:0,CreationTimestamp:2019-12-22 12:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 566034291,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l5h89 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l5h89,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-l5h89 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000baff80} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc000baffa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:58:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:58:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:58:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 12:58:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-22 12:58:19 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-22 12:58:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://be1665fd4a43b9e094b87ee6cfca17e0d5cbf4158585ccc54700554db22b7929}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 22 12:58:35.660: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 22 12:58:37.676: INFO: Saw kubelet event for our pod.
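The two checks above pass once at least one event for the pod has been reported by each expected source component (the scheduler, then the kubelet). A sketch of that filter; the plain-dict event shape is an illustration, not the v1.Event type:

```python
def saw_event_from(events, component):
    """Return True if any event in the list was reported by the given
    source component (e.g. 'default-scheduler' or 'kubelet')."""
    return any(e["source"] == component for e in events)

# Hypothetical events for the pod, matching what the test looks for:
events = [
    {"source": "default-scheduler", "reason": "Scheduled"},
    {"source": "kubelet", "reason": "Pulled"},
]
print(saw_event_from(events, "default-scheduler"))  # True
print(saw_event_from(events, "kubelet"))            # True
```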
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:58:37.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-4l4mq" for this suite.
Dec 22 12:59:25.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:59:25.907: INFO: namespace: e2e-tests-events-4l4mq, resource: bindings, ignored listing per whitelist
Dec 22 12:59:25.997: INFO: namespace e2e-tests-events-4l4mq deletion completed in 48.276297105s

• [SLOW TEST:66.870 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:59:25.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e1fd05fc-24ba-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 12:59:26.266: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-hp9z4" to be "success or failure"
Dec 22 12:59:26.369: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 103.466179ms
Dec 22 12:59:28.386: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120189096s
Dec 22 12:59:30.402: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13605776s
Dec 22 12:59:32.425: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15854437s
Dec 22 12:59:34.538: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271778801s
Dec 22 12:59:36.799: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.533429684s
Dec 22 12:59:40.027: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.761128981s
STEP: Saw pod success
Dec 22 12:59:40.028: INFO: Pod "pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005" satisfied condition "success or failure"
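The Pending-to-Succeeded progression logged above is the framework polling until the pod reaches a terminal phase, which is what "success or failure" means. A cluster-free sketch of that wait (the phase sequence is hard-coded for illustration):

```python
def wait_for_terminal_phase(get_phase, poll_limit=10):
    """Poll `get_phase` until it returns a terminal phase
    ('Succeeded' or 'Failed'), up to `poll_limit` attempts."""
    for _ in range(poll_limit):
        phase = get_phase()
        print(f'Phase="{phase}"')
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# The log above shows six Pending polls before Succeeded:
phases = iter(["Pending"] * 6 + ["Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```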
Dec 22 12:59:40.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 22 12:59:41.016: INFO: Waiting for pod pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005 to disappear
Dec 22 12:59:41.032: INFO: Pod pod-projected-configmaps-e1ff982d-24ba-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:59:41.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hp9z4" for this suite.
Dec 22 12:59:47.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 12:59:47.156: INFO: namespace: e2e-tests-projected-hp9z4, resource: bindings, ignored listing per whitelist
Dec 22 12:59:47.296: INFO: namespace e2e-tests-projected-hp9z4 deletion completed in 6.256702632s

• [SLOW TEST:21.299 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 12:59:47.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 12:59:59.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-nknsn" for this suite.
Dec 22 13:00:44.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:00:45.051: INFO: namespace: e2e-tests-kubelet-test-nknsn, resource: bindings, ignored listing per whitelist
Dec 22 13:00:45.220: INFO: namespace e2e-tests-kubelet-test-nknsn deletion completed in 45.529313s

• [SLOW TEST:57.923 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:00:45.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 22 13:00:45.394: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:01:05.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6ss9x" for this suite.
Dec 22 13:01:14.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:01:14.264: INFO: namespace: e2e-tests-init-container-6ss9x, resource: bindings, ignored listing per whitelist
Dec 22 13:01:14.371: INFO: namespace e2e-tests-init-container-6ss9x deletion completed in 8.343082532s

• [SLOW TEST:29.151 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:01:14.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 13:01:14.759: INFO: Creating deployment "test-recreate-deployment"
Dec 22 13:01:14.836: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 22 13:01:14.866: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 22 13:01:17.429: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 22 13:01:17.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616474, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:01:19.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616474, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:01:21.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616474, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:01:24.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616474, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:01:25.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616475, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712616474, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 22 13:01:27.455: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 22 13:01:27.473: INFO: Updating deployment test-recreate-deployment
Dec 22 13:01:27.474: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
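The repeated status dumps above show what "complete" means here: the test keeps polling while `UnavailableReplicas:1` persists, and only stops once every desired replica is updated and available and the controller has observed the latest generation. A minimal completeness check over those fields (key names mirror `v1.DeploymentStatus`, but the plain dict is an illustration, not the client-go type):

```python
def deployment_complete(desired, status):
    """True once every desired replica is both updated and available and
    the controller has caught up with the latest spec generation; until
    then the log keeps reporting MinimumReplicasUnavailable."""
    return (
        status["updatedReplicas"] == desired
        and status["availableReplicas"] == desired
        and status["observedGeneration"] >= status["generation"]
    )

# Mirrors the in-progress dumps above (1 desired, 0 available):
in_progress = {"updatedReplicas": 1, "availableReplicas": 0,
               "observedGeneration": 1, "generation": 1}
print(deployment_complete(1, in_progress))  # False
```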
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 22 13:01:29.958: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-p9fq8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9fq8/deployments/test-recreate-deployment,UID:22afd1fd-24bb-11ea-a994-fa163e34d433,ResourceVersion:15684214,Generation:2,CreationTimestamp:2019-12-22 13:01:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-22 13:01:28 +0000 UTC 2019-12-22 13:01:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-22 13:01:28 +0000 UTC 2019-12-22 13:01:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 22 13:01:29.971: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-p9fq8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9fq8/replicasets/test-recreate-deployment-589c4bfd,UID:2a742015-24bb-11ea-a994-fa163e34d433,ResourceVersion:15684212,Generation:1,CreationTimestamp:2019-12-22 13:01:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 22afd1fd-24bb-11ea-a994-fa163e34d433 0xc00265224f 0xc002652260}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:01:29.971: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 22 13:01:29.972: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-p9fq8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p9fq8/replicasets/test-recreate-deployment-5bf7f65dc,UID:22bfb58b-24bb-11ea-a994-fa163e34d433,ResourceVersion:15684202,Generation:2,CreationTimestamp:2019-12-22 13:01:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 22afd1fd-24bb-11ea-a994-fa163e34d433 0xc0026525d0 0xc0026525d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 22 13:01:29.980: INFO: Pod "test-recreate-deployment-589c4bfd-xwrls" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-xwrls,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-p9fq8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p9fq8/pods/test-recreate-deployment-589c4bfd-xwrls,UID:2a767e23-24bb-11ea-a994-fa163e34d433,ResourceVersion:15684216,Generation:0,CreationTimestamp:2019-12-22 13:01:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 2a742015-24bb-11ea-a994-fa163e34d433 0xc0018f6f0f 0xc0018f6f20}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9qdmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qdmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-9qdmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0018f6f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0018f6fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:01:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:01:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:01:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-22 13:01:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-22 13:01:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:01:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-p9fq8" for this suite.
Dec 22 13:01:42.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:01:42.493: INFO: namespace: e2e-tests-deployment-p9fq8, resource: bindings, ignored listing per whitelist
Dec 22 13:01:42.794: INFO: namespace e2e-tests-deployment-p9fq8 deletion completed in 12.809799695s

• [SLOW TEST:28.422 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:01:42.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:01:43.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-c26tk" to be "success or failure"
Dec 22 13:01:43.401: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 355.764173ms
Dec 22 13:01:45.751: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706365865s
Dec 22 13:01:47.773: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728334442s
Dec 22 13:01:49.922: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877444019s
Dec 22 13:01:51.974: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.929094966s
Dec 22 13:01:53.991: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.946116648s
STEP: Saw pod success
Dec 22 13:01:53.991: INFO: Pod "downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:01:53.999: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 13:01:55.150: INFO: Waiting for pod downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005 to disappear
Dec 22 13:01:55.485: INFO: Pod downwardapi-volume-33882168-24bb-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:01:55.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c26tk" for this suite.
Dec 22 13:02:01.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:02:01.775: INFO: namespace: e2e-tests-downward-api-c26tk, resource: bindings, ignored listing per whitelist
Dec 22 13:02:01.987: INFO: namespace e2e-tests-downward-api-c26tk deletion completed in 6.48038363s

• [SLOW TEST:19.193 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:02:01.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 22 13:02:02.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:02.735: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 22 13:02:02.735: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 22 13:02:02.754: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 22 13:02:02.896: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 22 13:02:02.929: INFO: scanned /root for discovery docs: 
Dec 22 13:02:02.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:37.148: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 22 13:02:37.148: INFO: stdout: "Created e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900\nScaling up e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 22 13:02:37.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:37.287: INFO: stderr: ""
Dec 22 13:02:37.287: INFO: stdout: "e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900-vq7lh e2e-test-nginx-rc-tqfhc "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 22 13:02:42.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:42.424: INFO: stderr: ""
Dec 22 13:02:42.425: INFO: stdout: "e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900-vq7lh e2e-test-nginx-rc-tqfhc "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 22 13:02:47.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:47.613: INFO: stderr: ""
Dec 22 13:02:47.613: INFO: stdout: "e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900-vq7lh "
Dec 22 13:02:47.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900-vq7lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:47.712: INFO: stderr: ""
Dec 22 13:02:47.713: INFO: stdout: "true"
Dec 22 13:02:47.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900-vq7lh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:47.831: INFO: stderr: ""
Dec 22 13:02:47.831: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 22 13:02:47.831: INFO: e2e-test-nginx-rc-3c615af7dfa6526fd59a861dd7eda900-vq7lh is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 22 13:02:47.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bzg7k'
Dec 22 13:02:48.000: INFO: stderr: ""
Dec 22 13:02:48.000: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:02:48.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bzg7k" for this suite.
Dec 22 13:03:12.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:12.356: INFO: namespace: e2e-tests-kubectl-bzg7k, resource: bindings, ignored listing per whitelist
Dec 22 13:03:12.368: INFO: namespace e2e-tests-kubectl-bzg7k deletion completed in 24.271993679s

• [SLOW TEST:70.380 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:03:12.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:03:12.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-88kwj" for this suite.
Dec 22 13:03:18.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:18.962: INFO: namespace: e2e-tests-services-88kwj, resource: bindings, ignored listing per whitelist
Dec 22 13:03:19.020: INFO: namespace e2e-tests-services-88kwj deletion completed in 6.288305141s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.652 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:03:19.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:03:19.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-g2pkp" to be "success or failure"
Dec 22 13:03:19.654: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.340497ms
Dec 22 13:03:22.249: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646734174s
Dec 22 13:03:24.288: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.685016677s
Dec 22 13:03:26.305: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702190844s
Dec 22 13:03:29.412: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.808933782s
Dec 22 13:03:32.087: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.484087149s
Dec 22 13:03:34.104: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.50122509s
Dec 22 13:03:36.118: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.515583548s
STEP: Saw pod success
Dec 22 13:03:36.118: INFO: Pod "downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:03:36.123: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 13:03:36.927: INFO: Waiting for pod downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005 to disappear
Dec 22 13:03:37.178: INFO: Pod downwardapi-volume-6d14f592-24bb-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:03:37.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g2pkp" for this suite.
Dec 22 13:03:43.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:03:43.651: INFO: namespace: e2e-tests-projected-g2pkp, resource: bindings, ignored listing per whitelist
Dec 22 13:03:43.697: INFO: namespace e2e-tests-projected-g2pkp deletion completed in 6.49987821s

• [SLOW TEST:24.677 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:03:43.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7bbb3449-24bb-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 13:03:44.267: INFO: Waiting up to 5m0s for pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-rkp8l" to be "success or failure"
Dec 22 13:03:44.316: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.77342ms
Dec 22 13:03:46.495: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227337238s
Dec 22 13:03:48.569: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30135386s
Dec 22 13:03:50.609: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341785968s
Dec 22 13:03:52.870: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.603019865s
Dec 22 13:03:54.952: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.684928449s
Dec 22 13:03:56.971: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.70390212s
Dec 22 13:03:59.004: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.736425749s
STEP: Saw pod success
Dec 22 13:03:59.004: INFO: Pod "pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:03:59.067: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:03:59.446: INFO: Waiting for pod pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005 to disappear
Dec 22 13:03:59.464: INFO: Pod pod-secrets-7bbe5cdc-24bb-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:03:59.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rkp8l" for this suite.
Dec 22 13:04:05.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:04:05.879: INFO: namespace: e2e-tests-secrets-rkp8l, resource: bindings, ignored listing per whitelist
Dec 22 13:04:05.942: INFO: namespace e2e-tests-secrets-rkp8l deletion completed in 6.383608194s

• [SLOW TEST:22.244 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:04:05.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:04:06.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4jpgp" for this suite.
Dec 22 13:04:12.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:04:12.910: INFO: namespace: e2e-tests-kubelet-test-4jpgp, resource: bindings, ignored listing per whitelist
Dec 22 13:04:12.949: INFO: namespace e2e-tests-kubelet-test-4jpgp deletion completed in 6.537518756s

• [SLOW TEST:7.008 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:04:12.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 22 13:04:13.221: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 22 13:04:13.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:04:13.646: INFO: stderr: ""
Dec 22 13:04:13.646: INFO: stdout: "service/redis-slave created\n"
Dec 22 13:04:13.647: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 22 13:04:13.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:04:14.044: INFO: stderr: ""
Dec 22 13:04:14.044: INFO: stdout: "service/redis-master created\n"
Dec 22 13:04:14.046: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 22 13:04:14.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:04:14.581: INFO: stderr: ""
Dec 22 13:04:14.581: INFO: stdout: "service/frontend created\n"
Dec 22 13:04:14.583: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 22 13:04:14.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:04:14.922: INFO: stderr: ""
Dec 22 13:04:14.922: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 22 13:04:14.922: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 22 13:04:14.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:04:15.415: INFO: stderr: ""
Dec 22 13:04:15.416: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 22 13:04:15.418: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 22 13:04:15.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:04:16.118: INFO: stderr: ""
Dec 22 13:04:16.119: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 22 13:04:16.119: INFO: Waiting for all frontend pods to be Running.
Dec 22 13:05:01.223: INFO: Waiting for frontend to serve content.
Dec 22 13:05:01.514: INFO: Trying to add a new entry to the guestbook.
Dec 22 13:05:01.589: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 22 13:05:01.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:05:02.224: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:05:02.225: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 13:05:02.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:05:02.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:05:02.520: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 13:05:02.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:05:02.676: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:05:02.676: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 13:05:02.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:05:02.862: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:05:02.862: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 13:05:02.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:05:03.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:05:03.327: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 22 13:05:03.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbhvf'
Dec 22 13:05:03.710: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:05:03.710: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:05:03.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hbhvf" for this suite.
Dec 22 13:05:55.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:05:56.068: INFO: namespace: e2e-tests-kubectl-hbhvf, resource: bindings, ignored listing per whitelist
Dec 22 13:05:56.095: INFO: namespace e2e-tests-kubectl-hbhvf deletion completed in 52.293187524s

• [SLOW TEST:103.144 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:05:56.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 22 13:05:56.449: INFO: Waiting up to 5m0s for pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005" in namespace "e2e-tests-var-expansion-pdvkg" to be "success or failure"
Dec 22 13:05:56.461: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.831549ms
Dec 22 13:05:58.924: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47504398s
Dec 22 13:06:01.003: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.554241617s
Dec 22 13:06:03.042: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593588212s
Dec 22 13:06:06.871: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.422416339s
Dec 22 13:06:08.900: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.451355599s
Dec 22 13:06:10.915: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.466073653s
Dec 22 13:06:12.952: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.503584199s
STEP: Saw pod success
Dec 22 13:06:12.953: INFO: Pod "var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:06:12.997: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 22 13:06:13.245: INFO: Waiting for pod var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005 to disappear
Dec 22 13:06:13.259: INFO: Pod var-expansion-ca7d5d3a-24bb-11ea-b023-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:06:13.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pdvkg" for this suite.
Dec 22 13:06:19.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:06:19.517: INFO: namespace: e2e-tests-var-expansion-pdvkg, resource: bindings, ignored listing per whitelist
Dec 22 13:06:19.534: INFO: namespace e2e-tests-var-expansion-pdvkg deletion completed in 6.263867001s

• [SLOW TEST:23.437 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:06:19.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:07:19.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-sw72p" for this suite.
Dec 22 13:07:43.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:07:44.076: INFO: namespace: e2e-tests-container-probe-sw72p, resource: bindings, ignored listing per whitelist
Dec 22 13:07:44.150: INFO: namespace e2e-tests-container-probe-sw72p deletion completed in 24.335450887s

• [SLOW TEST:84.616 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:07:44.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0ae6aa27-24bc-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 13:07:44.379: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-k5tg9" to be "success or failure"
Dec 22 13:07:44.387: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343336ms
Dec 22 13:07:46.397: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01818594s
Dec 22 13:07:48.417: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037941124s
Dec 22 13:07:51.171: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.791806001s
Dec 22 13:07:53.200: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.820972315s
Dec 22 13:07:55.231: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.852212054s
STEP: Saw pod success
Dec 22 13:07:55.232: INFO: Pod "pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:07:55.241: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 22 13:07:55.556: INFO: Waiting for pod pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005 to disappear
Dec 22 13:07:55.569: INFO: Pod pod-projected-secrets-0ae7e8ad-24bc-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:07:55.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k5tg9" for this suite.
Dec 22 13:08:01.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:08:02.096: INFO: namespace: e2e-tests-projected-k5tg9, resource: bindings, ignored listing per whitelist
Dec 22 13:08:02.100: INFO: namespace e2e-tests-projected-k5tg9 deletion completed in 6.50433065s

• [SLOW TEST:17.950 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:08:02.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-15c69766-24bc-11ea-b023-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-15c698be-24bc-11ea-b023-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-15c69766-24bc-11ea-b023-0242ac110005
STEP: Updating configmap cm-test-opt-upd-15c698be-24bc-11ea-b023-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-15c698ff-24bc-11ea-b023-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:08:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n7gmj" for this suite.
Dec 22 13:08:45.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:08:45.372: INFO: namespace: e2e-tests-configmap-n7gmj, resource: bindings, ignored listing per whitelist
Dec 22 13:08:45.497: INFO: namespace e2e-tests-configmap-n7gmj deletion completed in 24.307491112s

• [SLOW TEST:43.396 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:08:45.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-l7zpp
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 22 13:08:46.051: INFO: Found 0 stateful pods, waiting for 3
Dec 22 13:08:56.212: INFO: Found 2 stateful pods, waiting for 3
Dec 22 13:09:06.067: INFO: Found 2 stateful pods, waiting for 3
Dec 22 13:09:16.078: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:09:16.078: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:09:16.078: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 13:09:26.078: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:09:26.078: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:09:26.078: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 22 13:09:26.123: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 22 13:09:36.225: INFO: Updating stateful set ss2
Dec 22 13:09:36.249: INFO: Waiting for Pod e2e-tests-statefulset-l7zpp/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 22 13:09:46.946: INFO: Found 2 stateful pods, waiting for 3
Dec 22 13:09:58.101: INFO: Found 2 stateful pods, waiting for 3
Dec 22 13:10:06.963: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:10:06.963: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:10:06.963: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 22 13:10:16.978: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:10:16.978: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 22 13:10:16.978: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 22 13:10:17.012: INFO: Updating stateful set ss2
Dec 22 13:10:17.047: INFO: Waiting for Pod e2e-tests-statefulset-l7zpp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 13:10:27.109: INFO: Updating stateful set ss2
Dec 22 13:10:27.135: INFO: Waiting for StatefulSet e2e-tests-statefulset-l7zpp/ss2 to complete update
Dec 22 13:10:27.135: INFO: Waiting for Pod e2e-tests-statefulset-l7zpp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 13:10:37.186: INFO: Waiting for StatefulSet e2e-tests-statefulset-l7zpp/ss2 to complete update
Dec 22 13:10:37.187: INFO: Waiting for Pod e2e-tests-statefulset-l7zpp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 22 13:10:48.479: INFO: Waiting for StatefulSet e2e-tests-statefulset-l7zpp/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 22 13:10:57.165: INFO: Deleting all statefulset in ns e2e-tests-statefulset-l7zpp
Dec 22 13:10:57.170: INFO: Scaling statefulset ss2 to 0
Dec 22 13:11:37.226: INFO: Waiting for statefulset status.replicas updated to 0
Dec 22 13:11:37.242: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:11:37.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-l7zpp" for this suite.
Dec 22 13:11:45.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:11:45.593: INFO: namespace: e2e-tests-statefulset-l7zpp, resource: bindings, ignored listing per whitelist
Dec 22 13:11:45.610: INFO: namespace e2e-tests-statefulset-l7zpp deletion completed in 8.242840432s

• [SLOW TEST:180.113 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:11:45.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 22 13:11:45.796: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685682,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:11:45.797: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685682,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 22 13:11:55.855: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685695,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 22 13:11:55.857: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685695,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 22 13:12:05.882: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685708,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 13:12:05.882: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685708,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 22 13:12:15.910: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685720,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 22 13:12:15.911: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-a,UID:9ace893f-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685720,Generation:0,CreationTimestamp:2019-12-22 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 22 13:12:25.950: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-b,UID:b2bae748-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685733,Generation:0,CreationTimestamp:2019-12-22 13:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:12:25.950: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-b,UID:b2bae748-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685733,Generation:0,CreationTimestamp:2019-12-22 13:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 22 13:12:35.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-b,UID:b2bae748-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685746,Generation:0,CreationTimestamp:2019-12-22 13:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 22 13:12:35.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mt7kp,SelfLink:/api/v1/namespaces/e2e-tests-watch-mt7kp/configmaps/e2e-watch-test-configmap-b,UID:b2bae748-24bc-11ea-a994-fa163e34d433,ResourceVersion:15685746,Generation:0,CreationTimestamp:2019-12-22 13:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:12:46.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-mt7kp" for this suite.
Dec 22 13:12:52.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:12:52.164: INFO: namespace: e2e-tests-watch-mt7kp, resource: bindings, ignored listing per whitelist
Dec 22 13:12:52.266: INFO: namespace e2e-tests-watch-mt7kp deletion completed in 6.25197136s

• [SLOW TEST:66.656 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
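The watch behavior exercised above can be reproduced by hand. A minimal sketch of a ConfigMap analogous to `e2e-watch-test-configmap-a` (the name here is illustrative; the label is what the per-label watchers select on):

```yaml
# Illustrative ConfigMap; watchers filter on the watch-this-configmap label,
# so only watchers selecting multiple-watchers-A see its ADDED/MODIFIED/DELETED events.
apiVersion: v1
kind: ConfigMap
metadata:
  name: watch-demo-a            # hypothetical name
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "0"
```

Running `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch` in the namespace would then stream event notifications of the same kind the test observes.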
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:12:52.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 22 13:12:52.745: INFO: Waiting up to 5m0s for pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005" in namespace "e2e-tests-downward-api-thn2f" to be "success or failure"
Dec 22 13:12:52.759: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.1465ms
Dec 22 13:12:55.538: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793606663s
Dec 22 13:12:57.555: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.8101854s
Dec 22 13:12:59.569: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.824232166s
Dec 22 13:13:01.589: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.844039736s
Dec 22 13:13:04.239: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.494415915s
Dec 22 13:13:06.757: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.01221829s
Dec 22 13:13:08.778: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.033233094s
Dec 22 13:13:11.001: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.256702752s
Dec 22 13:13:13.035: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.290522453s
STEP: Saw pod success
Dec 22 13:13:13.036: INFO: Pod "downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:13:13.044: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 22 13:13:13.114: INFO: Waiting for pod downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005 to disappear
Dec 22 13:13:13.123: INFO: Pod downward-api-c2b2acc4-24bc-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:13:13.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-thn2f" for this suite.
Dec 22 13:13:19.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:13:19.379: INFO: namespace: e2e-tests-downward-api-thn2f, resource: bindings, ignored listing per whitelist
Dec 22 13:13:19.439: INFO: namespace e2e-tests-downward-api-thn2f deletion completed in 6.297276488s

• [SLOW TEST:27.173 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
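The Downward API test above injects pod metadata as environment variables. A minimal pod sketch of that mechanism, with illustrative names (the test's actual container is `dapi-container`; the env var names here are assumptions):

```yaml
# Pod name, namespace, and IP exposed to the container via fieldRef.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep MY_POD_"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```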
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:13:19.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-d2c07dd4-24bc-11ea-b023-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d2c07dd4-24bc-11ea-b023-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:13:34.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jzbp5" for this suite.
Dec 22 13:13:58.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:13:58.331: INFO: namespace: e2e-tests-configmap-jzbp5, resource: bindings, ignored listing per whitelist
Dec 22 13:13:58.370: INFO: namespace e2e-tests-configmap-jzbp5 deletion completed in 24.313516844s

• [SLOW TEST:38.930 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
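The "updates should be reflected in volume" test relies on the kubelet eventually resyncing ConfigMap-backed volumes after the ConfigMap object changes. A sketch of such a pod, with hypothetical names:

```yaml
# A ConfigMap mounted as a volume; editing the ConfigMap later causes the
# kubelet to refresh the mounted files (eventually, on its sync period).
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo   # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: configmap-test-upd  # hypothetical ConfigMap name
```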
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:13:58.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e9fb3087-24bc-11ea-b023-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 22 13:13:58.758: INFO: Waiting up to 5m0s for pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005" in namespace "e2e-tests-secrets-5m9r4" to be "success or failure"
Dec 22 13:13:58.770: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348192ms
Dec 22 13:14:01.365: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.606487273s
Dec 22 13:14:03.548: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.789887023s
Dec 22 13:14:05.559: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.801007618s
Dec 22 13:14:07.916: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.157869631s
Dec 22 13:14:09.934: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.175582056s
Dec 22 13:14:11.948: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.189667642s
Dec 22 13:14:13.963: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.204924038s
STEP: Saw pod success
Dec 22 13:14:13.963: INFO: Pod "pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:14:13.969: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 22 13:14:14.301: INFO: Waiting for pod pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005 to disappear
Dec 22 13:14:14.318: INFO: Pod pod-secrets-ea032fbe-24bc-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:14:14.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5m9r4" for this suite.
Dec 22 13:14:20.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:14:20.532: INFO: namespace: e2e-tests-secrets-5m9r4, resource: bindings, ignored listing per whitelist
Dec 22 13:14:20.687: INFO: namespace e2e-tests-secrets-5m9r4 deletion completed in 6.357669539s

• [SLOW TEST:22.316 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
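The Secrets test above mounts the same Secret at two mount points in one pod. A minimal sketch under assumed names:

```yaml
# One Secret consumed through two volumes in the same container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test   # hypothetical Secret name
  - name: secret-volume-2
    secret:
      secretName: secret-test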
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:14:20.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:14:21.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-w5rr4" to be "success or failure"
Dec 22 13:14:21.171: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 163.431371ms
Dec 22 13:14:23.803: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.795186418s
Dec 22 13:14:25.821: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.813137067s
Dec 22 13:14:28.641: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.6328556s
Dec 22 13:14:30.667: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.658678475s
Dec 22 13:14:32.757: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.749558887s
Dec 22 13:14:36.867: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.859425721s
STEP: Saw pod success
Dec 22 13:14:36.868: INFO: Pod "downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:14:36.893: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 13:14:37.701: INFO: Waiting for pod downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005 to disappear
Dec 22 13:14:37.863: INFO: Pod downwardapi-volume-f750ee7f-24bc-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:14:37.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w5rr4" for this suite.
Dec 22 13:14:46.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:14:46.123: INFO: namespace: e2e-tests-projected-w5rr4, resource: bindings, ignored listing per whitelist
Dec 22 13:14:46.226: INFO: namespace e2e-tests-projected-w5rr4 deletion completed in 8.314737805s

• [SLOW TEST:25.538 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
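The projected downwardAPI test above exposes a container's memory limit through a volume file via `resourceFieldRef`; when no limit is set on the container, the node's allocatable memory is reported as the default. A sketch, with illustrative names:

```yaml
# No memory limit on the container, so /etc/podinfo/memory_limit
# falls back to the node's allocatable memory.
apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-demo # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```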
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:14:46.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 13:14:46.751: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"06927d4b-24bd-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0024c2662), BlockOwnerDeletion:(*bool)(0xc0024c2663)}}
Dec 22 13:14:46.922: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0688b160-24bd-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0027b1472), BlockOwnerDeletion:(*bool)(0xc0027b1473)}}
Dec 22 13:14:46.961: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"068ec0b0-24bd-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0026efff2), BlockOwnerDeletion:(*bool)(0xc0026efff3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:14:52.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-w2k6d" for this suite.
Dec 22 13:15:02.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:15:02.417: INFO: namespace: e2e-tests-gc-w2k6d, resource: bindings, ignored listing per whitelist
Dec 22 13:15:02.655: INFO: namespace e2e-tests-gc-w2k6d deletion completed in 10.549891686s

• [SLOW TEST:16.429 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
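The garbage collector test above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, matching the logged `OwnerReferences`) and checks that deletion is not deadlocked. An illustrative metadata fragment for one link of that cycle (the UID placeholder must be a real object UID in practice):

```yaml
# Fragment only: pod1 declares pod3 as its controller-owner.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: <uid-of-pod3>          # placeholder; OwnerReferences require the real UID
    controller: true
    blockOwnerDeletion: true
```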
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:15:02.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 22 13:15:03.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005" in namespace "e2e-tests-projected-lhvfx" to be "success or failure"
Dec 22 13:15:03.196: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.836329ms
Dec 22 13:15:05.790: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.610168322s
Dec 22 13:15:07.815: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.635972572s
Dec 22 13:15:09.841: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.661729793s
Dec 22 13:15:12.083: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904014376s
Dec 22 13:15:14.098: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.918719373s
Dec 22 13:15:16.631: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.451290132s
STEP: Saw pod success
Dec 22 13:15:16.631: INFO: Pod "downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:15:16.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005 container client-container: 
STEP: delete the pod
Dec 22 13:15:17.034: INFO: Waiting for pod downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005 to disappear
Dec 22 13:15:17.043: INFO: Pod downwardapi-volume-10724f73-24bd-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:15:17.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lhvfx" for this suite.
Dec 22 13:15:23.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:15:23.439: INFO: namespace: e2e-tests-projected-lhvfx, resource: bindings, ignored listing per whitelist
Dec 22 13:15:23.466: INFO: namespace e2e-tests-projected-lhvfx deletion completed in 6.381192318s

• [SLOW TEST:20.811 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
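The DefaultMode variant of the projected downwardAPI test checks the file permission bits applied to projected files. A sketch with assumed names:

```yaml
# defaultMode 0400 makes the projected file read-only for the owner.
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```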
S
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:15:23.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 22 13:15:38.800: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:15:41.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-2n4xw" for this suite.
Dec 22 13:16:13.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:16:13.772: INFO: namespace: e2e-tests-replicaset-2n4xw, resource: bindings, ignored listing per whitelist
Dec 22 13:16:13.773: INFO: namespace e2e-tests-replicaset-2n4xw deletion completed in 32.6188018s

• [SLOW TEST:50.306 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
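The ReplicaSet test above first creates a bare pod labeled to match a ReplicaSet's selector (so the controller adopts it), then changes the pod's label (so the controller releases it and creates a replacement). A minimal ReplicaSet sketch under assumed names:

```yaml
# A pod carrying name=pod-adoption-release is adopted by this ReplicaSet;
# relabeling that pod releases it from the controller.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release    # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
```

Releasing can be reproduced with `kubectl label pod <pod> name=released --overwrite`, after which the ReplicaSet spins up a new matching pod.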
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:16:13.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3aad08bf-24bd-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 13:16:14.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-f6s6c" to be "success or failure"
Dec 22 13:16:14.180: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.552365ms
Dec 22 13:16:16.631: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457913587s
Dec 22 13:16:18.648: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475438807s
Dec 22 13:16:20.838: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664785037s
Dec 22 13:16:22.851: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.678570209s
Dec 22 13:16:24.866: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.693122794s
Dec 22 13:16:27.397: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.224495373s
STEP: Saw pod success
Dec 22 13:16:27.398: INFO: Pod "pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:16:27.784: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 22 13:16:27.914: INFO: Waiting for pod pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005 to disappear
Dec 22 13:16:27.933: INFO: Pod pod-configmaps-3abd3221-24bd-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:16:27.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f6s6c" for this suite.
Dec 22 13:16:33.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:16:34.022: INFO: namespace: e2e-tests-configmap-f6s6c, resource: bindings, ignored listing per whitelist
Dec 22 13:16:34.191: INFO: namespace e2e-tests-configmap-f6s6c deletion completed in 6.250632731s

• [SLOW TEST:20.416 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
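The basic "consumable from pods in volume" case differs from the update test earlier in that it only reads the ConfigMap once; the `items` list below additionally shows projecting a single key to a chosen path (key and file names are illustrative):

```yaml
# One ConfigMap key surfaced as a single file in the container.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/path/to/data"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: configmap-test-volume  # hypothetical ConfigMap name
      items:
      - key: data-1                # hypothetical key
        path: path/to/data
```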
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:16:34.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 22 13:16:34.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:36.662: INFO: stderr: ""
Dec 22 13:16:36.662: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 22 13:16:36.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:36.931: INFO: stderr: ""
Dec 22 13:16:36.932: INFO: stdout: "update-demo-nautilus-47crw update-demo-nautilus-jlqtz "
Dec 22 13:16:36.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47crw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:37.147: INFO: stderr: ""
Dec 22 13:16:37.148: INFO: stdout: ""
Dec 22 13:16:37.148: INFO: update-demo-nautilus-47crw is created but not running
Dec 22 13:16:42.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:42.409: INFO: stderr: ""
Dec 22 13:16:42.409: INFO: stdout: "update-demo-nautilus-47crw update-demo-nautilus-jlqtz "
Dec 22 13:16:42.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47crw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:42.638: INFO: stderr: ""
Dec 22 13:16:42.638: INFO: stdout: ""
Dec 22 13:16:42.638: INFO: update-demo-nautilus-47crw is created but not running
Dec 22 13:16:47.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:48.139: INFO: stderr: ""
Dec 22 13:16:48.139: INFO: stdout: "update-demo-nautilus-47crw update-demo-nautilus-jlqtz "
Dec 22 13:16:48.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47crw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:48.311: INFO: stderr: ""
Dec 22 13:16:48.311: INFO: stdout: ""
Dec 22 13:16:48.311: INFO: update-demo-nautilus-47crw is created but not running
Dec 22 13:16:53.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:53.509: INFO: stderr: ""
Dec 22 13:16:53.510: INFO: stdout: "update-demo-nautilus-47crw update-demo-nautilus-jlqtz "
Dec 22 13:16:53.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47crw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:53.694: INFO: stderr: ""
Dec 22 13:16:53.695: INFO: stdout: "true"
Dec 22 13:16:53.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47crw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:53.877: INFO: stderr: ""
Dec 22 13:16:53.878: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 13:16:53.878: INFO: validating pod update-demo-nautilus-47crw
Dec 22 13:16:53.906: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 13:16:53.906: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 22 13:16:53.906: INFO: update-demo-nautilus-47crw is verified up and running
Dec 22 13:16:53.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jlqtz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:54.048: INFO: stderr: ""
Dec 22 13:16:54.048: INFO: stdout: "true"
Dec 22 13:16:54.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jlqtz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:54.154: INFO: stderr: ""
Dec 22 13:16:54.154: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 22 13:16:54.154: INFO: validating pod update-demo-nautilus-jlqtz
Dec 22 13:16:54.166: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 22 13:16:54.166: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 22 13:16:54.166: INFO: update-demo-nautilus-jlqtz is verified up and running
STEP: using delete to clean up resources
Dec 22 13:16:54.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:54.333: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 22 13:16:54.333: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 22 13:16:54.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-cfp9h'
Dec 22 13:16:54.440: INFO: stderr: "No resources found.\n"
Dec 22 13:16:54.440: INFO: stdout: ""
Dec 22 13:16:54.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-cfp9h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 22 13:16:54.566: INFO: stderr: ""
Dec 22 13:16:54.566: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:16:54.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cfp9h" for this suite.
Dec 22 13:17:18.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:17:18.705: INFO: namespace: e2e-tests-kubectl-cfp9h, resource: bindings, ignored listing per whitelist
Dec 22 13:17:18.728: INFO: namespace e2e-tests-kubectl-cfp9h deletion completed in 24.155170786s

• [SLOW TEST:44.536 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:17:18.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 22 13:17:19.110: INFO: Number of nodes with available pods: 0
Dec 22 13:17:19.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:20.136: INFO: Number of nodes with available pods: 0
Dec 22 13:17:20.136: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:21.147: INFO: Number of nodes with available pods: 0
Dec 22 13:17:21.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:22.629: INFO: Number of nodes with available pods: 0
Dec 22 13:17:22.629: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:23.134: INFO: Number of nodes with available pods: 0
Dec 22 13:17:23.134: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:24.151: INFO: Number of nodes with available pods: 0
Dec 22 13:17:24.152: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:25.490: INFO: Number of nodes with available pods: 0
Dec 22 13:17:25.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:26.414: INFO: Number of nodes with available pods: 0
Dec 22 13:17:26.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:28.083: INFO: Number of nodes with available pods: 0
Dec 22 13:17:28.083: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:28.123: INFO: Number of nodes with available pods: 0
Dec 22 13:17:28.124: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:29.148: INFO: Number of nodes with available pods: 0
Dec 22 13:17:29.148: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:30.147: INFO: Number of nodes with available pods: 0
Dec 22 13:17:30.148: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:31.155: INFO: Number of nodes with available pods: 1
Dec 22 13:17:31.155: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 22 13:17:31.331: INFO: Number of nodes with available pods: 0
Dec 22 13:17:31.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:32.352: INFO: Number of nodes with available pods: 0
Dec 22 13:17:32.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:33.566: INFO: Number of nodes with available pods: 0
Dec 22 13:17:33.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:34.562: INFO: Number of nodes with available pods: 0
Dec 22 13:17:34.562: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:36.181: INFO: Number of nodes with available pods: 0
Dec 22 13:17:36.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:36.398: INFO: Number of nodes with available pods: 0
Dec 22 13:17:36.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:37.410: INFO: Number of nodes with available pods: 0
Dec 22 13:17:37.410: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:39.816: INFO: Number of nodes with available pods: 0
Dec 22 13:17:39.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:40.361: INFO: Number of nodes with available pods: 0
Dec 22 13:17:40.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:41.396: INFO: Number of nodes with available pods: 0
Dec 22 13:17:41.397: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:42.356: INFO: Number of nodes with available pods: 0
Dec 22 13:17:42.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:43.361: INFO: Number of nodes with available pods: 0
Dec 22 13:17:43.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 22 13:17:44.358: INFO: Number of nodes with available pods: 1
Dec 22 13:17:44.358: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tzbls, will wait for the garbage collector to delete the pods
Dec 22 13:17:44.476: INFO: Deleting DaemonSet.extensions daemon-set took: 27.762894ms
Dec 22 13:17:44.677: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.604321ms
Dec 22 13:18:02.808: INFO: Number of nodes with available pods: 0
Dec 22 13:18:02.808: INFO: Number of running nodes: 0, number of available pods: 0
Dec 22 13:18:02.816: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tzbls/daemonsets","resourceVersion":"15686440"},"items":null}

Dec 22 13:18:02.826: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tzbls/pods","resourceVersion":"15686440"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:18:02.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tzbls" for this suite.
Dec 22 13:18:10.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:18:11.046: INFO: namespace: e2e-tests-daemonsets-tzbls, resource: bindings, ignored listing per whitelist
Dec 22 13:18:11.101: INFO: namespace e2e-tests-daemonsets-tzbls deletion completed in 8.188009359s

• [SLOW TEST:52.373 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:18:11.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 22 13:18:11.308: INFO: Waiting up to 5m0s for pod "pod-80950e1f-24bd-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-vvm9s" to be "success or failure"
Dec 22 13:18:11.317: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046549ms
Dec 22 13:18:13.776: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467937011s
Dec 22 13:18:15.794: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484966334s
Dec 22 13:18:17.888: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579485359s
Dec 22 13:18:19.906: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.59794729s
Dec 22 13:18:22.337: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.02807344s
Dec 22 13:18:24.356: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.047834468s
STEP: Saw pod success
Dec 22 13:18:24.357: INFO: Pod "pod-80950e1f-24bd-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:18:24.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-80950e1f-24bd-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 13:18:24.840: INFO: Waiting for pod pod-80950e1f-24bd-11ea-b023-0242ac110005 to disappear
Dec 22 13:18:24.858: INFO: Pod pod-80950e1f-24bd-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:18:24.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vvm9s" for this suite.
Dec 22 13:18:30.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:18:31.139: INFO: namespace: e2e-tests-emptydir-vvm9s, resource: bindings, ignored listing per whitelist
Dec 22 13:18:31.202: INFO: namespace e2e-tests-emptydir-vvm9s deletion completed in 6.283088093s

• [SLOW TEST:20.101 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:18:31.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 22 13:18:31.667: INFO: Waiting up to 5m0s for pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005" in namespace "e2e-tests-emptydir-h5n69" to be "success or failure"
Dec 22 13:18:31.739: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.516072ms
Dec 22 13:18:33.875: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208252106s
Dec 22 13:18:35.902: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234959236s
Dec 22 13:18:38.821: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.153426122s
Dec 22 13:18:40.856: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.189158461s
Dec 22 13:18:42.901: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.233621484s
STEP: Saw pod success
Dec 22 13:18:42.901: INFO: Pod "pod-8cb4166f-24bd-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:18:42.936: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8cb4166f-24bd-11ea-b023-0242ac110005 container test-container: 
STEP: delete the pod
Dec 22 13:18:43.081: INFO: Waiting for pod pod-8cb4166f-24bd-11ea-b023-0242ac110005 to disappear
Dec 22 13:18:43.135: INFO: Pod pod-8cb4166f-24bd-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:18:43.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h5n69" for this suite.
Dec 22 13:18:49.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:18:49.267: INFO: namespace: e2e-tests-emptydir-h5n69, resource: bindings, ignored listing per whitelist
Dec 22 13:18:49.549: INFO: namespace e2e-tests-emptydir-h5n69 deletion completed in 6.40085152s

• [SLOW TEST:18.346 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:18:49.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-c6lbx in namespace e2e-tests-proxy-mbz94
I1222 13:18:49.909724       8 runners.go:184] Created replication controller with name: proxy-service-c6lbx, namespace: e2e-tests-proxy-mbz94, replica count: 1
I1222 13:18:50.961141       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:51.961937       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:52.962598       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:53.963534       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:54.964350       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:55.964947       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:56.965471       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:57.967956       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:58.970028       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:18:59.971066       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1222 13:19:00.971706       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:01.972334       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:02.972896       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:03.973749       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:04.974394       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:05.974958       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:06.975596       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:07.976493       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1222 13:19:08.977167       8 runners.go:184] proxy-service-c6lbx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 22 13:19:08.986: INFO: setup took 19.235687041s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 22 13:19:09.021: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-mbz94/pods/proxy-service-c6lbx-qxcdg:1080/proxy/: 
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 22 13:19:59.148: INFO: Container started at 2019-12-22 13:19:37 +0000 UTC, pod became ready at 2019-12-22 13:19:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:19:59.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-2bjkt" for this suite.
Dec 22 13:20:23.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:20:23.335: INFO: namespace: e2e-tests-container-probe-2bjkt, resource: bindings, ignored listing per whitelist
Dec 22 13:20:23.413: INFO: namespace e2e-tests-container-probe-2bjkt deletion completed in 24.25172341s

• [SLOW TEST:54.492 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 22 13:20:23.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-p5729/configmap-test-cf6cadd7-24bd-11ea-b023-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 22 13:20:23.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005" in namespace "e2e-tests-configmap-p5729" to be "success or failure"
Dec 22 13:20:23.727: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.75525ms
Dec 22 13:20:26.173: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503205744s
Dec 22 13:20:28.191: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521845578s
Dec 22 13:20:30.371: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.701393318s
Dec 22 13:20:32.466: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796634133s
Dec 22 13:20:34.574: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.90432127s
STEP: Saw pod success
Dec 22 13:20:34.574: INFO: Pod "pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005" satisfied condition "success or failure"
Dec 22 13:20:34.605: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005 container env-test: 
STEP: delete the pod
Dec 22 13:20:34.776: INFO: Waiting for pod pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005 to disappear
Dec 22 13:20:34.794: INFO: Pod pod-configmaps-cf6dd5f6-24bd-11ea-b023-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 22 13:20:34.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p5729" for this suite.
Dec 22 13:20:40.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 22 13:20:40.978: INFO: namespace: e2e-tests-configmap-p5729, resource: bindings, ignored listing per whitelist
Dec 22 13:20:41.038: INFO: namespace e2e-tests-configmap-p5729 deletion completed in 6.23594799s

• [SLOW TEST:17.625 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
Dec 22 13:20:41.039: INFO: Running AfterSuite actions on all nodes
Dec 22 13:20:41.039: INFO: Running AfterSuite actions on node 1
Dec 22 13:20:41.039: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9215.729 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS