I0131 10:47:17.482964 8 e2e.go:224] Starting e2e run "0be49316-4417-11ea-aae6-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580467636 - Will randomize all specs
Will run 201 of 2164 specs

Jan 31 10:47:18.196: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 10:47:18.203: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 31 10:47:18.230: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 31 10:47:18.276: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 31 10:47:18.276: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 31 10:47:18.276: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 31 10:47:18.286: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 31 10:47:18.286: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 31 10:47:18.286: INFO: e2e test version: v1.13.12
Jan 31 10:47:18.288: INFO: kube-apiserver version: v1.13.8
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:47:18.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jan 31 10:47:18.720: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0d5b6e24-4417-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 10:47:18.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-rl6bc" to be "success or failure"
Jan 31 10:47:18.747: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.186885ms
Jan 31 10:47:20.792: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054126885s
Jan 31 10:47:22.814: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075672329s
Jan 31 10:47:24.834: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095691305s
Jan 31 10:47:27.123: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.384574123s
Jan 31 10:47:29.158: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.419336122s
STEP: Saw pod success
Jan 31 10:47:29.158: INFO: Pod "pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:47:29.164: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 10:47:30.047: INFO: Waiting for pod pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005 to disappear
Jan 31 10:47:30.072: INFO: Pod pod-configmaps-0d5c4304-4417-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:47:30.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rl6bc" for this suite.
Jan 31 10:47:36.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:47:36.352: INFO: namespace: e2e-tests-configmap-rl6bc, resource: bindings, ignored listing per whitelist
Jan 31 10:47:36.444: INFO: namespace e2e-tests-configmap-rl6bc deletion completed in 6.361946585s

• [SLOW TEST:18.156 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:47:36.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:47:47.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-xfhvg" for this suite.
Jan 31 10:48:09.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:48:10.066: INFO: namespace: e2e-tests-replication-controller-xfhvg, resource: bindings, ignored listing per whitelist
Jan 31 10:48:10.109: INFO: namespace e2e-tests-replication-controller-xfhvg deletion completed in 22.208263392s

• [SLOW TEST:33.664 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:48:10.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 10:48:11.340: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 31 10:48:11.366: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-fdb6m/daemonsets","resourceVersion":"20068716"},"items":null}
Jan 31 10:48:11.375: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-fdb6m/pods","resourceVersion":"20068716"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:48:11.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-fdb6m" for this suite.
Jan 31 10:48:19.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:48:19.675: INFO: namespace: e2e-tests-daemonsets-fdb6m, resource: bindings, ignored listing per whitelist
Jan 31 10:48:19.743: INFO: namespace e2e-tests-daemonsets-fdb6m deletion completed in 8.266796051s

S [SKIPPING] [9.634 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 31 10:48:11.340: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:48:19.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 10:48:20.002: INFO: Number of nodes with available pods: 0
Jan 31 10:48:20.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:21.026: INFO: Number of nodes with available pods: 0
Jan 31 10:48:21.026: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:22.047: INFO: Number of nodes with available pods: 0
Jan 31 10:48:22.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:23.026: INFO: Number of nodes with available pods: 0
Jan 31 10:48:23.026: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:24.067: INFO: Number of nodes with available pods: 0
Jan 31 10:48:24.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:25.455: INFO: Number of nodes with available pods: 0
Jan 31 10:48:25.456: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:26.064: INFO: Number of nodes with available pods: 0
Jan 31 10:48:26.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:27.124: INFO: Number of nodes with available pods: 0
Jan 31 10:48:27.124: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:28.062: INFO: Number of nodes with available pods: 0
Jan 31 10:48:28.063: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:29.109: INFO: Number of nodes with available pods: 1
Jan 31 10:48:29.110: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 31 10:48:29.357: INFO: Number of nodes with available pods: 0
Jan 31 10:48:29.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:30.943: INFO: Number of nodes with available pods: 0
Jan 31 10:48:30.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:31.387: INFO: Number of nodes with available pods: 0
Jan 31 10:48:31.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:33.023: INFO: Number of nodes with available pods: 0
Jan 31 10:48:33.023: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:33.380: INFO: Number of nodes with available pods: 0
Jan 31 10:48:33.380: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:34.927: INFO: Number of nodes with available pods: 0
Jan 31 10:48:34.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:35.395: INFO: Number of nodes with available pods: 0
Jan 31 10:48:35.395: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:36.401: INFO: Number of nodes with available pods: 0
Jan 31 10:48:36.402: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:37.801: INFO: Number of nodes with available pods: 0
Jan 31 10:48:37.801: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:38.541: INFO: Number of nodes with available pods: 0
Jan 31 10:48:38.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:39.488: INFO: Number of nodes with available pods: 0
Jan 31 10:48:39.488: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:40.454: INFO: Number of nodes with available pods: 0
Jan 31 10:48:40.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:41.402: INFO: Number of nodes with available pods: 0
Jan 31 10:48:41.402: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 10:48:42.410: INFO: Number of nodes with available pods: 1
Jan 31 10:48:42.411: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4hz7m, will wait for the garbage collector to delete the pods
Jan 31 10:48:42.553: INFO: Deleting DaemonSet.extensions daemon-set took: 77.675236ms
Jan 31 10:48:42.753: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.835187ms
Jan 31 10:48:49.577: INFO: Number of nodes with available pods: 0
Jan 31 10:48:49.578: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 10:48:49.584: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4hz7m/daemonsets","resourceVersion":"20068809"},"items":null}
Jan 31 10:48:49.588: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4hz7m/pods","resourceVersion":"20068809"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:48:49.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4hz7m" for this suite.
Jan 31 10:48:55.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:48:55.690: INFO: namespace: e2e-tests-daemonsets-4hz7m, resource: bindings, ignored listing per whitelist
Jan 31 10:48:55.795: INFO: namespace e2e-tests-daemonsets-4hz7m deletion completed in 6.188683172s

• [SLOW TEST:36.051 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:48:55.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 31 10:48:55.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rc8hv'
Jan 31 10:48:58.047: INFO: stderr: ""
Jan 31 10:48:58.048: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 31 10:48:59.430: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:48:59.430: INFO: Found 0 / 1
Jan 31 10:49:00.206: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:00.206: INFO: Found 0 / 1
Jan 31 10:49:01.062: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:01.062: INFO: Found 0 / 1
Jan 31 10:49:02.084: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:02.085: INFO: Found 0 / 1
Jan 31 10:49:03.933: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:03.933: INFO: Found 0 / 1
Jan 31 10:49:04.383: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:04.384: INFO: Found 0 / 1
Jan 31 10:49:05.631: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:05.631: INFO: Found 0 / 1
Jan 31 10:49:06.067: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:06.067: INFO: Found 0 / 1
Jan 31 10:49:07.087: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:07.087: INFO: Found 0 / 1
Jan 31 10:49:08.067: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:08.067: INFO: Found 0 / 1
Jan 31 10:49:09.067: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:09.067: INFO: Found 1 / 1
Jan 31 10:49:09.067: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 31 10:49:09.073: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:09.073: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 31 10:49:09.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-gnxjq --namespace=e2e-tests-kubectl-rc8hv -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 31 10:49:09.297: INFO: stderr: ""
Jan 31 10:49:09.297: INFO: stdout: "pod/redis-master-gnxjq patched\n"
STEP: checking annotations
Jan 31 10:49:09.348: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 10:49:09.349: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
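The `kubectl patch` call above sends the body `{"metadata":{"annotations":{"x":"y"}}}`. For a plain annotation addition like this, the server-side merge behaves like an RFC 7386 JSON merge patch: nested objects merge recursively and untouched keys survive. A minimal sketch of that merge semantics (the helper name `json_merge_patch` is made up for illustration, not a Kubernetes API):

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    a None value deletes a key, and any other value replaces the target."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# The patch body from the test, applied to a pared-down pod object:
pod = {"metadata": {"name": "redis-master-gnxjq", "labels": {"app": "redis"}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # {'x': 'y'}
```

Note that the pod's existing name and labels are untouched, which is why the subsequent "checking annotations" step still matches the pod with the `app:redis` selector.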
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:49:09.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rc8hv" for this suite.
Jan 31 10:49:33.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:49:33.532: INFO: namespace: e2e-tests-kubectl-rc8hv, resource: bindings, ignored listing per whitelist
Jan 31 10:49:33.540: INFO: namespace e2e-tests-kubectl-rc8hv deletion completed in 24.17476934s

• [SLOW TEST:37.745 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:49:33.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 10:49:33.820: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 43.029602ms)
Jan 31 10:49:33.904: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 82.988155ms)
Jan 31 10:49:33.941: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 36.3562ms)
Jan 31 10:49:33.962: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 20.880794ms)
Jan 31 10:49:33.980: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.444143ms)
Jan 31 10:49:34.002: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 21.758859ms)
Jan 31 10:49:34.023: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 20.619921ms)
Jan 31 10:49:34.032: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.231598ms)
Jan 31 10:49:34.042: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.591153ms)
Jan 31 10:49:34.050: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.800078ms)
Jan 31 10:49:34.057: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.984471ms)
Jan 31 10:49:34.064: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.239815ms)
Jan 31 10:49:34.071: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.512451ms)
Jan 31 10:49:34.077: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.077902ms)
Jan 31 10:49:34.085: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.838662ms)
Jan 31 10:49:34.093: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.905391ms)
Jan 31 10:49:34.098: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.886422ms)
Jan 31 10:49:34.104: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.550763ms)
Jan 31 10:49:34.111: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.938275ms)
Jan 31 10:49:34.118: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.984838ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:49:34.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-pdllp" for this suite.
Jan 31 10:49:40.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:49:40.268: INFO: namespace: e2e-tests-proxy-pdllp, resource: bindings, ignored listing per whitelist
Jan 31 10:49:40.299: INFO: namespace e2e-tests-proxy-pdllp deletion completed in 6.173870538s

• [SLOW TEST:6.759 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
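Each of the twenty timed requests in the proxy test above goes through the apiserver's node proxy subresource, where the `<node>:<port>` form selects an explicit kubelet port. A small sketch of how that request path is assembled (the helper name `node_proxy_logs_path` is made up for illustration):

```python
def node_proxy_logs_path(node_name: str, kubelet_port: int = 10250,
                         subpath: str = "logs/") -> str:
    """Build the apiserver proxy path for a node's kubelet endpoint.
    Appending `:<port>` to the node name targets an explicit kubelet
    port, as the test above does; 10250 is the conventional secure port."""
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/{subpath}"

print(node_proxy_logs_path("hunter-server-hu5at5svl7ps"))
# /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/
```

The string this produces matches the path logged on every iteration of the test; the `(200; …ms)` suffix in the log is the HTTP status and round-trip latency the framework records per request.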
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:49:40.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 31 10:49:48.469: INFO: 10 pods remaining
Jan 31 10:49:48.469: INFO: 10 pods has nil DeletionTimestamp
Jan 31 10:49:48.469: INFO: 
Jan 31 10:49:49.961: INFO: 10 pods remaining
Jan 31 10:49:49.962: INFO: 10 pods has nil DeletionTimestamp
Jan 31 10:49:49.962: INFO: 
Jan 31 10:49:50.590: INFO: 6 pods remaining
Jan 31 10:49:50.591: INFO: 0 pods has nil DeletionTimestamp
Jan 31 10:49:50.591: INFO: 
STEP: Gathering metrics
W0131 10:49:51.052429       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 10:49:51.052: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:49:51.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m5hrb" for this suite.
Jan 31 10:50:05.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:50:05.257: INFO: namespace: e2e-tests-gc-m5hrb, resource: bindings, ignored listing per whitelist
Jan 31 10:50:05.382: INFO: namespace e2e-tests-gc-m5hrb deletion completed in 14.325620749s

• [SLOW TEST:25.082 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
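The behavior the garbage collector test verifies above, the RC object surviving until every pod it owns is gone, is foreground cascading deletion: the delete request carries `propagationPolicy: Foreground`, and the apiserver holds the owner behind a `foregroundDeletion` finalizer until its dependents are deleted. A sketch of the `DeleteOptions` body such a request would send (pure construction, no cluster needed):

```python
import json

def foreground_delete_options() -> str:
    """DeleteOptions asking the garbage collector to delete dependent
    objects first and only then remove the owner object itself."""
    return json.dumps({
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        "propagationPolicy": "Foreground",
    })

print(foreground_delete_options())
```

With `Background` instead, the owner would disappear immediately and the pods would be cleaned up afterwards, which is why the test's "10 pods remaining" polling only makes sense under the foreground policy.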
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:50:05.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jktcj
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 31 10:50:05.675: INFO: Found 0 stateful pods, waiting for 3
Jan 31 10:50:15.691: INFO: Found 1 stateful pods, waiting for 3
Jan 31 10:50:25.710: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:50:25.711: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:50:25.711: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 10:50:35.697: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:50:35.697: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:50:35.697: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:50:35.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jktcj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 10:50:36.413: INFO: stderr: "I0131 10:50:35.982138      88 log.go:172] (0xc0006da2c0) (0xc000714640) Create stream\nI0131 10:50:35.982394      88 log.go:172] (0xc0006da2c0) (0xc000714640) Stream added, broadcasting: 1\nI0131 10:50:35.987655      88 log.go:172] (0xc0006da2c0) Reply frame received for 1\nI0131 10:50:35.987741      88 log.go:172] (0xc0006da2c0) (0xc0005a8e60) Create stream\nI0131 10:50:35.987754      88 log.go:172] (0xc0006da2c0) (0xc0005a8e60) Stream added, broadcasting: 3\nI0131 10:50:35.989350      88 log.go:172] (0xc0006da2c0) Reply frame received for 3\nI0131 10:50:35.989518      88 log.go:172] (0xc0006da2c0) (0xc00032e000) Create stream\nI0131 10:50:35.989543      88 log.go:172] (0xc0006da2c0) (0xc00032e000) Stream added, broadcasting: 5\nI0131 10:50:35.990664      88 log.go:172] (0xc0006da2c0) Reply frame received for 5\nI0131 10:50:36.266499      88 log.go:172] (0xc0006da2c0) Data frame received for 3\nI0131 10:50:36.266604      88 log.go:172] (0xc0005a8e60) (3) Data frame handling\nI0131 10:50:36.266636      88 log.go:172] (0xc0005a8e60) (3) Data frame sent\nI0131 10:50:36.396326      88 log.go:172] (0xc0006da2c0) Data frame received for 1\nI0131 10:50:36.396426      88 log.go:172] (0xc000714640) (1) Data frame handling\nI0131 10:50:36.396470      88 log.go:172] (0xc000714640) (1) Data frame sent\nI0131 10:50:36.396644      88 log.go:172] (0xc0006da2c0) (0xc000714640) Stream removed, broadcasting: 1\nI0131 10:50:36.397557      88 log.go:172] (0xc0006da2c0) (0xc0005a8e60) Stream removed, broadcasting: 3\nI0131 10:50:36.397947      88 log.go:172] (0xc0006da2c0) (0xc00032e000) Stream removed, broadcasting: 5\nI0131 10:50:36.398007      88 log.go:172] (0xc0006da2c0) (0xc000714640) Stream removed, broadcasting: 1\nI0131 10:50:36.398018      88 log.go:172] (0xc0006da2c0) (0xc0005a8e60) Stream removed, broadcasting: 3\nI0131 10:50:36.398061      88 log.go:172] (0xc0006da2c0) (0xc00032e000) Stream removed, broadcasting: 5\n"
Jan 31 10:50:36.413: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 10:50:36.413: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 31 10:50:46.671: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 31 10:50:56.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jktcj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 10:50:57.700: INFO: stderr: "I0131 10:50:57.379267     110 log.go:172] (0xc0007362c0) (0xc0008257c0) Create stream\nI0131 10:50:57.379539     110 log.go:172] (0xc0007362c0) (0xc0008257c0) Stream added, broadcasting: 1\nI0131 10:50:57.386131     110 log.go:172] (0xc0007362c0) Reply frame received for 1\nI0131 10:50:57.387607     110 log.go:172] (0xc0007362c0) (0xc00090e000) Create stream\nI0131 10:50:57.387789     110 log.go:172] (0xc0007362c0) (0xc00090e000) Stream added, broadcasting: 3\nI0131 10:50:57.396023     110 log.go:172] (0xc0007362c0) Reply frame received for 3\nI0131 10:50:57.396127     110 log.go:172] (0xc0007362c0) (0xc000824fa0) Create stream\nI0131 10:50:57.396141     110 log.go:172] (0xc0007362c0) (0xc000824fa0) Stream added, broadcasting: 5\nI0131 10:50:57.397154     110 log.go:172] (0xc0007362c0) Reply frame received for 5\nI0131 10:50:57.531745     110 log.go:172] (0xc0007362c0) Data frame received for 3\nI0131 10:50:57.531848     110 log.go:172] (0xc00090e000) (3) Data frame handling\nI0131 10:50:57.531883     110 log.go:172] (0xc00090e000) (3) Data frame sent\nI0131 10:50:57.687566     110 log.go:172] (0xc0007362c0) (0xc00090e000) Stream removed, broadcasting: 3\nI0131 10:50:57.688089     110 log.go:172] (0xc0007362c0) Data frame received for 1\nI0131 10:50:57.688216     110 log.go:172] (0xc0008257c0) (1) Data frame handling\nI0131 10:50:57.688233     110 log.go:172] (0xc0007362c0) (0xc000824fa0) Stream removed, broadcasting: 5\nI0131 10:50:57.688289     110 log.go:172] (0xc0008257c0) (1) Data frame sent\nI0131 10:50:57.688345     110 log.go:172] (0xc0007362c0) (0xc0008257c0) Stream removed, broadcasting: 1\nI0131 10:50:57.688464     110 log.go:172] (0xc0007362c0) Go away received\nI0131 10:50:57.689128     110 log.go:172] (0xc0007362c0) (0xc0008257c0) Stream removed, broadcasting: 1\nI0131 10:50:57.689156     110 log.go:172] (0xc0007362c0) (0xc00090e000) Stream removed, broadcasting: 3\nI0131 10:50:57.689161     110 log.go:172] (0xc0007362c0) (0xc000824fa0) Stream removed, broadcasting: 5\n"
Jan 31 10:50:57.700: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 10:50:57.700: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 10:51:07.764: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:51:07.764: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 10:51:07.765: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 10:51:18.237: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:51:18.238: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 10:51:18.238: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 10:51:29.049: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:51:29.049: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 10:51:37.788: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:51:37.788: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 10:51:47.822: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 31 10:51:57.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jktcj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 10:51:58.714: INFO: stderr: "I0131 10:51:58.079449     133 log.go:172] (0xc0001386e0) (0xc000692780) Create stream\nI0131 10:51:58.079911     133 log.go:172] (0xc0001386e0) (0xc000692780) Stream added, broadcasting: 1\nI0131 10:51:58.088932     133 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0131 10:51:58.089042     133 log.go:172] (0xc0001386e0) (0xc000440780) Create stream\nI0131 10:51:58.089075     133 log.go:172] (0xc0001386e0) (0xc000440780) Stream added, broadcasting: 3\nI0131 10:51:58.091389     133 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0131 10:51:58.091430     133 log.go:172] (0xc0001386e0) (0xc00023cc80) Create stream\nI0131 10:51:58.091446     133 log.go:172] (0xc0001386e0) (0xc00023cc80) Stream added, broadcasting: 5\nI0131 10:51:58.096373     133 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0131 10:51:58.371194     133 log.go:172] (0xc0001386e0) Data frame received for 3\nI0131 10:51:58.371284     133 log.go:172] (0xc000440780) (3) Data frame handling\nI0131 10:51:58.371324     133 log.go:172] (0xc000440780) (3) Data frame sent\nI0131 10:51:58.695353     133 log.go:172] (0xc0001386e0) (0xc000440780) Stream removed, broadcasting: 3\nI0131 10:51:58.695741     133 log.go:172] (0xc0001386e0) Data frame received for 1\nI0131 10:51:58.695773     133 log.go:172] (0xc000692780) (1) Data frame handling\nI0131 10:51:58.695804     133 log.go:172] (0xc000692780) (1) Data frame sent\nI0131 10:51:58.695821     133 log.go:172] (0xc0001386e0) (0xc000692780) Stream removed, broadcasting: 1\nI0131 10:51:58.696292     133 log.go:172] (0xc0001386e0) (0xc00023cc80) Stream removed, broadcasting: 5\nI0131 10:51:58.696533     133 log.go:172] (0xc0001386e0) Go away received\nI0131 10:51:58.697298     133 log.go:172] (0xc0001386e0) (0xc000692780) Stream removed, broadcasting: 1\nI0131 10:51:58.697563     133 log.go:172] (0xc0001386e0) (0xc000440780) Stream removed, broadcasting: 3\nI0131 10:51:58.697681     133 log.go:172] (0xc0001386e0) (0xc00023cc80) Stream removed, broadcasting: 5\n"
Jan 31 10:51:58.714: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 10:51:58.714: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 10:51:59.042: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 31 10:52:09.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jktcj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 10:52:09.831: INFO: stderr: "I0131 10:52:09.448234     155 log.go:172] (0xc000148840) (0xc0007cd360) Create stream\nI0131 10:52:09.448480     155 log.go:172] (0xc000148840) (0xc0007cd360) Stream added, broadcasting: 1\nI0131 10:52:09.455410     155 log.go:172] (0xc000148840) Reply frame received for 1\nI0131 10:52:09.455560     155 log.go:172] (0xc000148840) (0xc000608000) Create stream\nI0131 10:52:09.455584     155 log.go:172] (0xc000148840) (0xc000608000) Stream added, broadcasting: 3\nI0131 10:52:09.458066     155 log.go:172] (0xc000148840) Reply frame received for 3\nI0131 10:52:09.458111     155 log.go:172] (0xc000148840) (0xc000654000) Create stream\nI0131 10:52:09.458137     155 log.go:172] (0xc000148840) (0xc000654000) Stream added, broadcasting: 5\nI0131 10:52:09.459771     155 log.go:172] (0xc000148840) Reply frame received for 5\nI0131 10:52:09.652165     155 log.go:172] (0xc000148840) Data frame received for 3\nI0131 10:52:09.652223     155 log.go:172] (0xc000608000) (3) Data frame handling\nI0131 10:52:09.652245     155 log.go:172] (0xc000608000) (3) Data frame sent\nI0131 10:52:09.816438     155 log.go:172] (0xc000148840) Data frame received for 1\nI0131 10:52:09.816983     155 log.go:172] (0xc000148840) (0xc000608000) Stream removed, broadcasting: 3\nI0131 10:52:09.817176     155 log.go:172] (0xc0007cd360) (1) Data frame handling\nI0131 10:52:09.817270     155 log.go:172] (0xc0007cd360) (1) Data frame sent\nI0131 10:52:09.817364     155 log.go:172] (0xc000148840) (0xc000654000) Stream removed, broadcasting: 5\nI0131 10:52:09.817474     155 log.go:172] (0xc000148840) (0xc0007cd360) Stream removed, broadcasting: 1\nI0131 10:52:09.817562     155 log.go:172] (0xc000148840) Go away received\nI0131 10:52:09.818141     155 log.go:172] (0xc000148840) (0xc0007cd360) Stream removed, broadcasting: 1\nI0131 10:52:09.818228     155 log.go:172] (0xc000148840) (0xc000608000) Stream removed, broadcasting: 3\nI0131 10:52:09.818280     155 log.go:172] (0xc000148840) (0xc000654000) Stream removed, broadcasting: 5\n"
Jan 31 10:52:09.831: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 10:52:09.831: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 10:52:19.913: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:52:19.913: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:52:19.913: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:52:29.947: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:52:29.948: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:52:29.948: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:52:39.975: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:52:39.976: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:52:39.976: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:52:50.437: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
Jan 31 10:52:50.437: INFO: Waiting for Pod e2e-tests-statefulset-jktcj/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 10:53:00.965: INFO: Waiting for StatefulSet e2e-tests-statefulset-jktcj/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 31 10:53:09.994: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jktcj
Jan 31 10:53:10.000: INFO: Scaling statefulset ss2 to 0
Jan 31 10:53:40.047: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 10:53:40.052: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:53:40.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jktcj" for this suite.
Jan 31 10:53:48.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:53:48.202: INFO: namespace: e2e-tests-statefulset-jktcj, resource: bindings, ignored listing per whitelist
Jan 31 10:53:48.329: INFO: namespace e2e-tests-statefulset-jktcj deletion completed in 8.231426412s

• [SLOW TEST:222.947 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
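The rolling-update spec above repeatedly runs `mv -v ... || true` inside each pod to stash and restore `index.html` around the image update. A local sketch of that idiom (paths under /tmp/e2e-demo are illustrative, not the pod's filesystem): the `|| true` guarantees a zero exit status even when the source file is already gone, so the test's `kubectl exec` loop never aborts on a pod that was already processed.

```shell
# Recreate the file layout the test manipulates, locally.
mkdir -p /tmp/e2e-demo/html
echo 'hello' > /tmp/e2e-demo/html/index.html

# First run: moves the file, prints the "'src' -> 'dst'" line seen in the log.
mv -v /tmp/e2e-demo/html/index.html /tmp/e2e-demo/ || true

# Second run: the source no longer exists; mv fails, but `|| true`
# swallows the failure so the overall command still exits 0.
mv -v /tmp/e2e-demo/html/index.html /tmp/e2e-demo/ || true
```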
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:53:48.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 10:53:48.713: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.631625ms)
Jan 31 10:53:48.722: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.560682ms)
Jan 31 10:53:48.727: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.763234ms)
Jan 31 10:53:48.739: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.092867ms)
Jan 31 10:53:48.762: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.031621ms)
Jan 31 10:53:48.775: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.392451ms)
Jan 31 10:53:48.788: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.518098ms)
Jan 31 10:53:48.801: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.494187ms)
Jan 31 10:53:48.814: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.573072ms)
Jan 31 10:53:48.832: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.394206ms)
Jan 31 10:53:48.895: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 63.186947ms)
Jan 31 10:53:48.952: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 57.006401ms)
Jan 31 10:53:48.968: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.462956ms)
Jan 31 10:53:48.978: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.41486ms)
Jan 31 10:53:48.984: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.608436ms)
Jan 31 10:53:48.990: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.749032ms)
Jan 31 10:53:48.995: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.932617ms)
Jan 31 10:53:49.000: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.208267ms)
Jan 31 10:53:49.005: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.749813ms)
Jan 31 10:53:49.009: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.961705ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:53:49.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-7xvjs" for this suite.
Jan 31 10:53:55.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:53:55.128: INFO: namespace: e2e-tests-proxy-7xvjs, resource: bindings, ignored listing per whitelist
Jan 31 10:53:55.270: INFO: namespace e2e-tests-proxy-7xvjs deletion completed in 6.257448378s

• [SLOW TEST:6.940 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
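Each of the twenty timed requests above hits the node's `logs` proxy subresource on the apiserver. The request shape (node name taken from this log; the `--raw` invocation is an illustrative equivalent, not part of the test):

```
GET /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/

# equivalent ad-hoc check from a workstation:
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"
```

The apiserver forwards the request to the kubelet on that node, which serves a directory listing of /var/log — hence the `alternatives.log` entries and HTTP 200s in the responses.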
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:53:55.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 31 10:53:55.470: INFO: Waiting up to 5m0s for pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-8n89c" to be "success or failure"
Jan 31 10:53:55.479: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.238334ms
Jan 31 10:53:57.927: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45693284s
Jan 31 10:53:59.960: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489957608s
Jan 31 10:54:02.014: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544365464s
Jan 31 10:54:04.770: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.300665516s
Jan 31 10:54:06.783: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.312798322s
STEP: Saw pod success
Jan 31 10:54:06.783: INFO: Pod "pod-f9d3bba5-4417-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:54:06.786: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f9d3bba5-4417-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 10:54:06.861: INFO: Waiting for pod pod-f9d3bba5-4417-11ea-aae6-0242ac110005 to disappear
Jan 31 10:54:06.870: INFO: Pod pod-f9d3bba5-4417-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:54:06.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8n89c" for this suite.
Jan 31 10:54:12.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:54:13.123: INFO: namespace: e2e-tests-emptydir-8n89c, resource: bindings, ignored listing per whitelist
Jan 31 10:54:13.156: INFO: namespace e2e-tests-emptydir-8n89c deletion completed in 6.214098497s

• [SLOW TEST:17.886 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
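The (non-root,0777,default) case exercises a non-root container writing a world-writable file into an emptyDir volume backed by the node's default medium. A minimal sketch of that kind of pod (names, image, and command are assumptions; the real test uses an internal mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # hypothetical name
spec:
  securityContext:
    runAsUser: 1001              # non-root, as the test variant requires
  containers:
  - name: test-container
    image: busybox               # stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /cache/f && chmod 0777 /cache/f && ls -l /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                 # default medium = node disk (vs. medium: Memory)
  restartPolicy: Never
```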
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:54:13.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-04895b8f-4418-11ea-aae6-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-04895b64-4418-11ea-aae6-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 31 10:54:13.548: INFO: Waiting up to 5m0s for pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-2kc5m" to be "success or failure"
Jan 31 10:54:13.641: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 92.815511ms
Jan 31 10:54:15.650: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10187777s
Jan 31 10:54:17.677: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129043967s
Jan 31 10:54:20.383: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.834726926s
Jan 31 10:54:22.400: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.852369658s
Jan 31 10:54:24.417: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.869447862s
STEP: Saw pod success
Jan 31 10:54:24.418: INFO: Pod "projected-volume-04895a85-4418-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:54:24.425: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-04895a85-4418-11ea-aae6-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 31 10:54:24.506: INFO: Waiting for pod projected-volume-04895a85-4418-11ea-aae6-0242ac110005 to disappear
Jan 31 10:54:24.518: INFO: Pod projected-volume-04895a85-4418-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:54:24.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2kc5m" for this suite.
Jan 31 10:54:30.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:54:30.767: INFO: namespace: e2e-tests-projected-2kc5m, resource: bindings, ignored listing per whitelist
Jan 31 10:54:30.788: INFO: namespace e2e-tests-projected-2kc5m deletion completed in 6.250459949s

• [SLOW TEST:17.631 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
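The projected-volume spec above mounts a ConfigMap, a Secret, and downward-API data through a single `projected` volume, which is exactly what the "all components" assertion checks. A minimal sketch, with hypothetical resource names in place of the generated `configmap-projected-all-test-volume-...` / `secret-projected-all-test-volume-...` ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo           # hypothetical name
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox               # stand-in image
    command: ["sh", "-c", "ls -R /projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:                   # all three source kinds in one volume
      - configMap:
          name: my-configmap     # hypothetical
      - secret:
          name: my-secret        # hypothetical
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
```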
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:54:30.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 31 10:54:30.940: INFO: Waiting up to 5m0s for pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005" in namespace "e2e-tests-var-expansion-mtn7f" to be "success or failure"
Jan 31 10:54:31.093: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 152.722957ms
Jan 31 10:54:33.107: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166683782s
Jan 31 10:54:35.150: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209514396s
Jan 31 10:54:37.273: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.332742719s
Jan 31 10:54:39.291: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35056395s
Jan 31 10:54:41.338: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.397610187s
STEP: Saw pod success
Jan 31 10:54:41.338: INFO: Pod "var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:54:41.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 10:54:41.584: INFO: Waiting for pod var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005 to disappear
Jan 31 10:54:41.604: INFO: Pod var-expansion-0ef90a9c-4418-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:54:41.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-mtn7f" for this suite.
Jan 31 10:54:48.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:54:48.635: INFO: namespace: e2e-tests-var-expansion-mtn7f, resource: bindings, ignored listing per whitelist
Jan 31 10:54:48.677: INFO: namespace e2e-tests-var-expansion-mtn7f deletion completed in 7.050224146s

• [SLOW TEST:17.888 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
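Variable expansion here means the kubelet substituting `$(VAR)` references in a container's `command`/`args` from its declared environment before the process starts. A minimal sketch of the kind of pod the test creates (name, image, and message are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox               # stand-in image
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]         # expanded by the kubelet, not by a shell
  restartPolicy: Never
```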
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:54:48.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-hk8ch
Jan 31 10:54:58.969: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-hk8ch
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 10:54:58.975: INFO: Initial restart count of pod liveness-http is 0
Jan 31 10:55:25.577: INFO: Restart count of pod e2e-tests-container-probe-hk8ch/liveness-http is now 1 (26.601717987s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:55:25.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hk8ch" for this suite.
Jan 31 10:55:31.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:55:31.942: INFO: namespace: e2e-tests-container-probe-hk8ch, resource: bindings, ignored listing per whitelist
Jan 31 10:55:32.062: INFO: namespace e2e-tests-container-probe-hk8ch deletion completed in 6.411660091s

• [SLOW TEST:43.384 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
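The liveness-http pod above is restarted once its `/healthz` probe starts failing, which is what drives the restart count from 0 to 1 in the log. A sketch of that probe configuration, after the standard Kubernetes liveness example (the image and timings are assumptions about what the e2e test deploys):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # sample server that fails /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz           # kubelet GETs this; non-2xx/3xx => restart
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```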
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:55:32.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0131 10:55:34.869692       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 10:55:34.869: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:55:34.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jg9r7" for this suite.
Jan 31 10:55:42.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:55:43.079: INFO: namespace: e2e-tests-gc-jg9r7, resource: bindings, ignored listing per whitelist
Jan 31 10:55:43.121: INFO: namespace e2e-tests-gc-jg9r7 deletion completed in 8.239724981s

• [SLOW TEST:11.058 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:55:43.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 31 10:55:43.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 31 10:55:43.706: INFO: stderr: ""
Jan 31 10:55:43.706: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:55:43.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mk6bs" for this suite.
Jan 31 10:55:49.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:55:49.826: INFO: namespace: e2e-tests-kubectl-mk6bs, resource: bindings, ignored listing per whitelist
Jan 31 10:55:49.932: INFO: namespace e2e-tests-kubectl-mk6bs deletion completed in 6.210429782s

• [SLOW TEST:6.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:55:49.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-3e2a485c-4418-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-3e2a485c-4418-11ea-aae6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:56:04.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-69psq" for this suite.
Jan 31 10:56:28.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:56:28.602: INFO: namespace: e2e-tests-projected-69psq, resource: bindings, ignored listing per whitelist
Jan 31 10:56:28.683: INFO: namespace e2e-tests-projected-69psq deletion completed in 24.271394767s

• [SLOW TEST:38.751 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:56:28.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 31 10:56:42.085: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:56:43.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-jk6gk" for this suite.
Jan 31 10:57:11.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:57:12.013: INFO: namespace: e2e-tests-replicaset-jk6gk, resource: bindings, ignored listing per whitelist
Jan 31 10:57:12.048: INFO: namespace e2e-tests-replicaset-jk6gk deletion completed in 28.855916851s

• [SLOW TEST:43.364 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:57:12.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-hmvc
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 10:57:12.357: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hmvc" in namespace "e2e-tests-subpath-qrgsc" to be "success or failure"
Jan 31 10:57:12.433: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 75.547106ms
Jan 31 10:57:14.985: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628032947s
Jan 31 10:57:17.004: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.646728944s
Jan 31 10:57:19.025: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667955314s
Jan 31 10:57:21.045: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687955128s
Jan 31 10:57:23.065: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.708276195s
Jan 31 10:57:25.081: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.723703573s
Jan 31 10:57:27.097: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.739634095s
Jan 31 10:57:29.158: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 16.801013038s
Jan 31 10:57:31.177: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 18.819702776s
Jan 31 10:57:33.197: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 20.840243933s
Jan 31 10:57:35.212: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 22.85494272s
Jan 31 10:57:37.229: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 24.872509982s
Jan 31 10:57:39.249: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 26.892183537s
Jan 31 10:57:41.272: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 28.914815154s
Jan 31 10:57:43.295: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 30.938111336s
Jan 31 10:57:45.327: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Running", Reason="", readiness=false. Elapsed: 32.970334477s
Jan 31 10:57:47.482: INFO: Pod "pod-subpath-test-configmap-hmvc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.125521881s
STEP: Saw pod success
Jan 31 10:57:47.483: INFO: Pod "pod-subpath-test-configmap-hmvc" satisfied condition "success or failure"
Jan 31 10:57:47.512: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-hmvc container test-container-subpath-configmap-hmvc: 
STEP: delete the pod
Jan 31 10:57:47.800: INFO: Waiting for pod pod-subpath-test-configmap-hmvc to disappear
Jan 31 10:57:47.873: INFO: Pod pod-subpath-test-configmap-hmvc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hmvc
Jan 31 10:57:47.873: INFO: Deleting pod "pod-subpath-test-configmap-hmvc" in namespace "e2e-tests-subpath-qrgsc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:57:47.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-qrgsc" for this suite.
Jan 31 10:57:55.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:57:56.161: INFO: namespace: e2e-tests-subpath-qrgsc, resource: bindings, ignored listing per whitelist
Jan 31 10:57:56.173: INFO: namespace e2e-tests-subpath-qrgsc deletion completed in 8.274703803s

• [SLOW TEST:44.124 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:57:56.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 10:57:56.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-2sqcv" to be "success or failure"
Jan 31 10:57:56.371: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.486001ms
Jan 31 10:57:58.410: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049748045s
Jan 31 10:58:00.427: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067366094s
Jan 31 10:58:02.484: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123854296s
Jan 31 10:58:04.528: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167615905s
Jan 31 10:58:06.583: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.222938073s
STEP: Saw pod success
Jan 31 10:58:06.584: INFO: Pod "downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:58:06.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 10:58:06.805: INFO: Waiting for pod downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005 to disappear
Jan 31 10:58:06.843: INFO: Pod downwardapi-volume-8968af22-4418-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:58:06.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2sqcv" for this suite.
Jan 31 10:58:12.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:58:13.130: INFO: namespace: e2e-tests-downward-api-2sqcv, resource: bindings, ignored listing per whitelist
Jan 31 10:58:13.149: INFO: namespace e2e-tests-downward-api-2sqcv deletion completed in 6.22491745s

• [SLOW TEST:16.976 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:58:13.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 31 10:58:13.409: INFO: Waiting up to 5m0s for pod "pod-939249e7-4418-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-b4bkl" to be "success or failure"
Jan 31 10:58:13.422: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.343682ms
Jan 31 10:58:15.461: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051110941s
Jan 31 10:58:17.544: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134600125s
Jan 31 10:58:19.577: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168009427s
Jan 31 10:58:21.590: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180904083s
Jan 31 10:58:23.615: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205090405s
Jan 31 10:58:25.634: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.224643631s
STEP: Saw pod success
Jan 31 10:58:25.634: INFO: Pod "pod-939249e7-4418-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:58:25.644: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-939249e7-4418-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 10:58:26.410: INFO: Waiting for pod pod-939249e7-4418-11ea-aae6-0242ac110005 to disappear
Jan 31 10:58:26.426: INFO: Pod pod-939249e7-4418-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:58:26.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b4bkl" for this suite.
Jan 31 10:58:32.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:58:32.706: INFO: namespace: e2e-tests-emptydir-b4bkl, resource: bindings, ignored listing per whitelist
Jan 31 10:58:32.728: INFO: namespace e2e-tests-emptydir-b4bkl deletion completed in 6.291985954s

• [SLOW TEST:19.579 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:58:32.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 10:58:32.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-ds2rv" to be "success or failure"
Jan 31 10:58:32.914: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.73832ms
Jan 31 10:58:35.082: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175700291s
Jan 31 10:58:37.101: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194623803s
Jan 31 10:58:39.191: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28543341s
Jan 31 10:58:41.255: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34877043s
Jan 31 10:58:43.281: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.374881538s
STEP: Saw pod success
Jan 31 10:58:43.281: INFO: Pod "downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:58:43.285: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 10:58:43.678: INFO: Waiting for pod downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005 to disappear
Jan 31 10:58:43.688: INFO: Pod downwardapi-volume-9f3166ff-4418-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:58:43.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ds2rv" for this suite.
Jan 31 10:58:50.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:58:50.691: INFO: namespace: e2e-tests-downward-api-ds2rv, resource: bindings, ignored listing per whitelist
Jan 31 10:58:50.896: INFO: namespace e2e-tests-downward-api-ds2rv deletion completed in 7.195951195s

• [SLOW TEST:18.167 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:58:50.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-aa0d7de1-4418-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 10:58:51.148: INFO: Waiting up to 5m0s for pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-hk5vs" to be "success or failure"
Jan 31 10:58:51.204: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.286716ms
Jan 31 10:58:53.212: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064236905s
Jan 31 10:58:55.225: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077519069s
Jan 31 10:58:57.370: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222374018s
Jan 31 10:58:59.405: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257435351s
Jan 31 10:59:01.523: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.375036689s
STEP: Saw pod success
Jan 31 10:59:01.523: INFO: Pod "pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 10:59:01.545: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 31 10:59:01.742: INFO: Waiting for pod pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005 to disappear
Jan 31 10:59:01.771: INFO: Pod pod-secrets-aa0e5d70-4418-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 10:59:01.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hk5vs" for this suite.
Jan 31 10:59:07.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 10:59:07.930: INFO: namespace: e2e-tests-secrets-hk5vs, resource: bindings, ignored listing per whitelist
Jan 31 10:59:07.986: INFO: namespace e2e-tests-secrets-hk5vs deletion completed in 6.200525295s

• [SLOW TEST:17.090 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 10:59:07.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wqmbh
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-wqmbh
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-wqmbh
Jan 31 10:59:08.286: INFO: Found 0 stateful pods, waiting for 1
Jan 31 10:59:18.302: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 31 10:59:18.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 10:59:19.136: INFO: stderr: "I0131 10:59:18.708245     198 log.go:172] (0xc0006e4370) (0xc000726640) Create stream\nI0131 10:59:18.708937     198 log.go:172] (0xc0006e4370) (0xc000726640) Stream added, broadcasting: 1\nI0131 10:59:18.721128     198 log.go:172] (0xc0006e4370) Reply frame received for 1\nI0131 10:59:18.721208     198 log.go:172] (0xc0006e4370) (0xc0005c8c80) Create stream\nI0131 10:59:18.721221     198 log.go:172] (0xc0006e4370) (0xc0005c8c80) Stream added, broadcasting: 3\nI0131 10:59:18.723237     198 log.go:172] (0xc0006e4370) Reply frame received for 3\nI0131 10:59:18.723269     198 log.go:172] (0xc0006e4370) (0xc00079c000) Create stream\nI0131 10:59:18.723285     198 log.go:172] (0xc0006e4370) (0xc00079c000) Stream added, broadcasting: 5\nI0131 10:59:18.726695     198 log.go:172] (0xc0006e4370) Reply frame received for 5\nI0131 10:59:18.954769     198 log.go:172] (0xc0006e4370) Data frame received for 3\nI0131 10:59:18.954901     198 log.go:172] (0xc0005c8c80) (3) Data frame handling\nI0131 10:59:18.954939     198 log.go:172] (0xc0005c8c80) (3) Data frame sent\nI0131 10:59:19.116800     198 log.go:172] (0xc0006e4370) (0xc00079c000) Stream removed, broadcasting: 5\nI0131 10:59:19.118101     198 log.go:172] (0xc0006e4370) Data frame received for 1\nI0131 10:59:19.118388     198 log.go:172] (0xc0006e4370) (0xc0005c8c80) Stream removed, broadcasting: 3\nI0131 10:59:19.118600     198 log.go:172] (0xc000726640) (1) Data frame handling\nI0131 10:59:19.118661     198 log.go:172] (0xc000726640) (1) Data frame sent\nI0131 10:59:19.118704     198 log.go:172] (0xc0006e4370) (0xc000726640) Stream removed, broadcasting: 1\nI0131 10:59:19.119549     198 log.go:172] (0xc0006e4370) (0xc000726640) Stream removed, broadcasting: 1\nI0131 10:59:19.119570     198 log.go:172] (0xc0006e4370) (0xc0005c8c80) Stream removed, broadcasting: 3\nI0131 10:59:19.119578     198 log.go:172] (0xc0006e4370) (0xc00079c000) Stream removed, broadcasting: 5\nI0131 10:59:19.121271     198 log.go:172] (0xc0006e4370) Go away received\n"
Jan 31 10:59:19.137: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 10:59:19.137: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

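The step above drives pod ss-0 unready by moving nginx's index.html out of the web root via `kubectl exec`, so the readiness probe starts failing; the trailing `|| true` keeps the exec's exit status 0 even if the file has already been moved. A minimal local sketch of that idiom (temporary paths here are illustrative, not the pod's filesystem):

```shell
# Local sketch of the idempotent move the test runs inside each pod.
# `|| true` forces exit status 0 even when mv fails because the file is
# already gone, so a repeated `kubectl exec` of the same command never errors.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/html"
echo ok > "$tmpdir/html/index.html"

mv -v "$tmpdir/html/index.html" "$tmpdir/" || true   # first run: moves the file
mv -v "$tmpdir/html/index.html" "$tmpdir/" || true   # second run: mv fails, exit stays 0
rc=$?
```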
Jan 31 10:59:19.152: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 10:59:19.152: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 10:59:19.165: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 31 10:59:29.230: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 10:59:29.231: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 10:59:29.231: INFO: 
Jan 31 10:59:29.231: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 31 10:59:30.736: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.961651841s
Jan 31 10:59:31.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.456553698s
Jan 31 10:59:32.777: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.42993382s
Jan 31 10:59:33.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.415758307s
Jan 31 10:59:34.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.383623028s
Jan 31 10:59:35.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.360658398s
Jan 31 10:59:37.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.339061787s
Jan 31 10:59:38.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 642.151856ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-wqmbh
Jan 31 10:59:39.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 10:59:40.821: INFO: stderr: "I0131 10:59:40.038353     219 log.go:172] (0xc00071c370) (0xc00073c640) Create stream\nI0131 10:59:40.038751     219 log.go:172] (0xc00071c370) (0xc00073c640) Stream added, broadcasting: 1\nI0131 10:59:40.055613     219 log.go:172] (0xc00071c370) Reply frame received for 1\nI0131 10:59:40.055714     219 log.go:172] (0xc00071c370) (0xc0005b6be0) Create stream\nI0131 10:59:40.055728     219 log.go:172] (0xc00071c370) (0xc0005b6be0) Stream added, broadcasting: 3\nI0131 10:59:40.065243     219 log.go:172] (0xc00071c370) Reply frame received for 3\nI0131 10:59:40.065326     219 log.go:172] (0xc00071c370) (0xc000374000) Create stream\nI0131 10:59:40.065348     219 log.go:172] (0xc00071c370) (0xc000374000) Stream added, broadcasting: 5\nI0131 10:59:40.070679     219 log.go:172] (0xc00071c370) Reply frame received for 5\nI0131 10:59:40.407034     219 log.go:172] (0xc00071c370) Data frame received for 3\nI0131 10:59:40.407395     219 log.go:172] (0xc0005b6be0) (3) Data frame handling\nI0131 10:59:40.407451     219 log.go:172] (0xc0005b6be0) (3) Data frame sent\nI0131 10:59:40.800881     219 log.go:172] (0xc00071c370) Data frame received for 1\nI0131 10:59:40.801046     219 log.go:172] (0xc00071c370) (0xc0005b6be0) Stream removed, broadcasting: 3\nI0131 10:59:40.801109     219 log.go:172] (0xc00073c640) (1) Data frame handling\nI0131 10:59:40.801130     219 log.go:172] (0xc00073c640) (1) Data frame sent\nI0131 10:59:40.801190     219 log.go:172] (0xc00071c370) (0xc000374000) Stream removed, broadcasting: 5\nI0131 10:59:40.801214     219 log.go:172] (0xc00071c370) (0xc00073c640) Stream removed, broadcasting: 1\nI0131 10:59:40.801222     219 log.go:172] (0xc00071c370) Go away received\nI0131 10:59:40.802434     219 log.go:172] (0xc00071c370) (0xc00073c640) Stream removed, broadcasting: 1\nI0131 10:59:40.802594     219 log.go:172] (0xc00071c370) (0xc0005b6be0) Stream removed, broadcasting: 3\nI0131 10:59:40.802630     219 log.go:172] (0xc00071c370) (0xc000374000) Stream removed, broadcasting: 5\n"
Jan 31 10:59:40.822: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 10:59:40.822: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 10:59:40.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 10:59:41.612: INFO: stderr: "I0131 10:59:41.099388     240 log.go:172] (0xc00013a160) (0xc000278e60) Create stream\nI0131 10:59:41.099669     240 log.go:172] (0xc00013a160) (0xc000278e60) Stream added, broadcasting: 1\nI0131 10:59:41.108434     240 log.go:172] (0xc00013a160) Reply frame received for 1\nI0131 10:59:41.108513     240 log.go:172] (0xc00013a160) (0xc0006d4000) Create stream\nI0131 10:59:41.108524     240 log.go:172] (0xc00013a160) (0xc0006d4000) Stream added, broadcasting: 3\nI0131 10:59:41.110822     240 log.go:172] (0xc00013a160) Reply frame received for 3\nI0131 10:59:41.110984     240 log.go:172] (0xc00013a160) (0xc000544000) Create stream\nI0131 10:59:41.111005     240 log.go:172] (0xc00013a160) (0xc000544000) Stream added, broadcasting: 5\nI0131 10:59:41.112382     240 log.go:172] (0xc00013a160) Reply frame received for 5\nI0131 10:59:41.330870     240 log.go:172] (0xc00013a160) Data frame received for 3\nI0131 10:59:41.330949     240 log.go:172] (0xc0006d4000) (3) Data frame handling\nI0131 10:59:41.330977     240 log.go:172] (0xc0006d4000) (3) Data frame sent\nI0131 10:59:41.331150     240 log.go:172] (0xc00013a160) Data frame received for 5\nI0131 10:59:41.331314     240 log.go:172] (0xc000544000) (5) Data frame handling\nI0131 10:59:41.331389     240 log.go:172] (0xc000544000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0131 10:59:41.590932     240 log.go:172] (0xc00013a160) Data frame received for 1\nI0131 10:59:41.591005     240 log.go:172] (0xc000278e60) (1) Data frame handling\nI0131 10:59:41.591030     240 log.go:172] (0xc000278e60) (1) Data frame sent\nI0131 10:59:41.599094     240 log.go:172] (0xc00013a160) (0xc000278e60) Stream removed, broadcasting: 1\nI0131 10:59:41.599456     240 log.go:172] (0xc00013a160) (0xc000544000) Stream removed, broadcasting: 5\nI0131 10:59:41.599522     240 log.go:172] (0xc00013a160) (0xc0006d4000) Stream removed, broadcasting: 3\nI0131 10:59:41.599544     240 log.go:172] (0xc00013a160) Go away received\nI0131 10:59:41.599757     240 log.go:172] (0xc00013a160) (0xc000278e60) Stream removed, broadcasting: 1\nI0131 10:59:41.599771     240 log.go:172] (0xc00013a160) (0xc0006d4000) Stream removed, broadcasting: 3\nI0131 10:59:41.599783     240 log.go:172] (0xc00013a160) (0xc000544000) Stream removed, broadcasting: 5\n"
Jan 31 10:59:41.612: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 10:59:41.612: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 10:59:41.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 10:59:41.955: INFO: stderr: "I0131 10:59:41.737623     262 log.go:172] (0xc000700370) (0xc000635360) Create stream\nI0131 10:59:41.737687     262 log.go:172] (0xc000700370) (0xc000635360) Stream added, broadcasting: 1\nI0131 10:59:41.742046     262 log.go:172] (0xc000700370) Reply frame received for 1\nI0131 10:59:41.742066     262 log.go:172] (0xc000700370) (0xc000656000) Create stream\nI0131 10:59:41.742074     262 log.go:172] (0xc000700370) (0xc000656000) Stream added, broadcasting: 3\nI0131 10:59:41.743303     262 log.go:172] (0xc000700370) Reply frame received for 3\nI0131 10:59:41.743386     262 log.go:172] (0xc000700370) (0xc0006b6000) Create stream\nI0131 10:59:41.743477     262 log.go:172] (0xc000700370) (0xc0006b6000) Stream added, broadcasting: 5\nI0131 10:59:41.745056     262 log.go:172] (0xc000700370) Reply frame received for 5\nI0131 10:59:41.831056     262 log.go:172] (0xc000700370) Data frame received for 3\nI0131 10:59:41.831131     262 log.go:172] (0xc000656000) (3) Data frame handling\nI0131 10:59:41.831152     262 log.go:172] (0xc000656000) (3) Data frame sent\nI0131 10:59:41.832507     262 log.go:172] (0xc000700370) Data frame received for 5\nI0131 10:59:41.832524     262 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0131 10:59:41.832535     262 log.go:172] (0xc0006b6000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0131 10:59:41.945353     262 log.go:172] (0xc000700370) (0xc000656000) Stream removed, broadcasting: 3\nI0131 10:59:41.945651     262 log.go:172] (0xc000700370) Data frame received for 1\nI0131 10:59:41.945672     262 log.go:172] (0xc000635360) (1) Data frame handling\nI0131 10:59:41.945706     262 log.go:172] (0xc000635360) (1) Data frame sent\nI0131 10:59:41.945717     262 log.go:172] (0xc000700370) (0xc000635360) Stream removed, broadcasting: 1\nI0131 10:59:41.945905     262 log.go:172] (0xc000700370) (0xc0006b6000) Stream removed, broadcasting: 5\nI0131 10:59:41.946029     262 log.go:172] (0xc000700370) Go away received\nI0131 10:59:41.946378     262 log.go:172] (0xc000700370) (0xc000635360) Stream removed, broadcasting: 1\nI0131 10:59:41.946397     262 log.go:172] (0xc000700370) (0xc000656000) Stream removed, broadcasting: 3\nI0131 10:59:41.946406     262 log.go:172] (0xc000700370) (0xc0006b6000) Stream removed, broadcasting: 5\n"
Jan 31 10:59:41.955: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 10:59:41.955: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

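The restore direction runs the inverse move, `mv -v /tmp/index.html /usr/share/nginx/html/ || true`, on every replica. On the freshly created ss-1 and ss-2 the stderr above shows "mv: can't rename '/tmp/index.html': No such file or directory", and the `|| true` is what absorbs that failure. A local sketch of the failure path (temporary paths are illustrative):

```shell
# Restore-direction sketch: when the source file does not exist, mv complains
# on stderr ("No such file or directory"), but `|| true` still yields exit 0,
# so the test harness treats the command as successful.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/html"
mv -v "$tmpdir/index.html" "$tmpdir/html/" 2>"$tmpdir/err" || true
rc=$?
cat "$tmpdir/err"
```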
Jan 31 10:59:41.974: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:59:41.974: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:59:41.974: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 10:59:51.988: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:59:51.989: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 10:59:51.989: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 31 10:59:51.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 10:59:52.656: INFO: stderr: "I0131 10:59:52.255420     284 log.go:172] (0xc0005ea4d0) (0xc0005af360) Create stream\nI0131 10:59:52.256097     284 log.go:172] (0xc0005ea4d0) (0xc0005af360) Stream added, broadcasting: 1\nI0131 10:59:52.264612     284 log.go:172] (0xc0005ea4d0) Reply frame received for 1\nI0131 10:59:52.264660     284 log.go:172] (0xc0005ea4d0) (0xc0000ea000) Create stream\nI0131 10:59:52.264695     284 log.go:172] (0xc0005ea4d0) (0xc0000ea000) Stream added, broadcasting: 3\nI0131 10:59:52.265751     284 log.go:172] (0xc0005ea4d0) Reply frame received for 3\nI0131 10:59:52.265800     284 log.go:172] (0xc0005ea4d0) (0xc0000ea0a0) Create stream\nI0131 10:59:52.265809     284 log.go:172] (0xc0005ea4d0) (0xc0000ea0a0) Stream added, broadcasting: 5\nI0131 10:59:52.267136     284 log.go:172] (0xc0005ea4d0) Reply frame received for 5\nI0131 10:59:52.402040     284 log.go:172] (0xc0005ea4d0) Data frame received for 3\nI0131 10:59:52.402132     284 log.go:172] (0xc0000ea000) (3) Data frame handling\nI0131 10:59:52.402151     284 log.go:172] (0xc0000ea000) (3) Data frame sent\nI0131 10:59:52.647286     284 log.go:172] (0xc0005ea4d0) (0xc0000ea000) Stream removed, broadcasting: 3\nI0131 10:59:52.647563     284 log.go:172] (0xc0005ea4d0) Data frame received for 1\nI0131 10:59:52.647599     284 log.go:172] (0xc0005af360) (1) Data frame handling\nI0131 10:59:52.647623     284 log.go:172] (0xc0005af360) (1) Data frame sent\nI0131 10:59:52.647642     284 log.go:172] (0xc0005ea4d0) (0xc0005af360) Stream removed, broadcasting: 1\nI0131 10:59:52.647681     284 log.go:172] (0xc0005ea4d0) (0xc0000ea0a0) Stream removed, broadcasting: 5\nI0131 10:59:52.647814     284 log.go:172] (0xc0005ea4d0) Go away received\nI0131 10:59:52.648162     284 log.go:172] (0xc0005ea4d0) (0xc0005af360) Stream removed, broadcasting: 1\nI0131 10:59:52.648184     284 log.go:172] (0xc0005ea4d0) (0xc0000ea000) Stream removed, broadcasting: 3\nI0131 10:59:52.648194     284 log.go:172] (0xc0005ea4d0) (0xc0000ea0a0) Stream removed, broadcasting: 5\n"
Jan 31 10:59:52.657: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 10:59:52.657: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 10:59:52.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 10:59:53.266: INFO: stderr: "I0131 10:59:52.834828     306 log.go:172] (0xc000730370) (0xc0005ed4a0) Create stream\nI0131 10:59:52.835065     306 log.go:172] (0xc000730370) (0xc0005ed4a0) Stream added, broadcasting: 1\nI0131 10:59:52.843290     306 log.go:172] (0xc000730370) Reply frame received for 1\nI0131 10:59:52.843341     306 log.go:172] (0xc000730370) (0xc000750000) Create stream\nI0131 10:59:52.843350     306 log.go:172] (0xc000730370) (0xc000750000) Stream added, broadcasting: 3\nI0131 10:59:52.844941     306 log.go:172] (0xc000730370) Reply frame received for 3\nI0131 10:59:52.844968     306 log.go:172] (0xc000730370) (0xc0006a4000) Create stream\nI0131 10:59:52.844976     306 log.go:172] (0xc000730370) (0xc0006a4000) Stream added, broadcasting: 5\nI0131 10:59:52.848367     306 log.go:172] (0xc000730370) Reply frame received for 5\nI0131 10:59:53.090103     306 log.go:172] (0xc000730370) Data frame received for 3\nI0131 10:59:53.090185     306 log.go:172] (0xc000750000) (3) Data frame handling\nI0131 10:59:53.090212     306 log.go:172] (0xc000750000) (3) Data frame sent\nI0131 10:59:53.257506     306 log.go:172] (0xc000730370) Data frame received for 1\nI0131 10:59:53.257695     306 log.go:172] (0xc000730370) (0xc000750000) Stream removed, broadcasting: 3\nI0131 10:59:53.257764     306 log.go:172] (0xc0005ed4a0) (1) Data frame handling\nI0131 10:59:53.257777     306 log.go:172] (0xc0005ed4a0) (1) Data frame sent\nI0131 10:59:53.257786     306 log.go:172] (0xc000730370) (0xc0005ed4a0) Stream removed, broadcasting: 1\nI0131 10:59:53.257811     306 log.go:172] (0xc000730370) (0xc0006a4000) Stream removed, broadcasting: 5\nI0131 10:59:53.257859     306 log.go:172] (0xc000730370) Go away received\nI0131 10:59:53.258171     306 log.go:172] (0xc000730370) (0xc0005ed4a0) Stream removed, broadcasting: 1\nI0131 10:59:53.258203     306 log.go:172] (0xc000730370) (0xc000750000) Stream removed, broadcasting: 3\nI0131 10:59:53.258213     306 log.go:172] (0xc000730370) (0xc0006a4000) Stream removed, broadcasting: 5\n"
Jan 31 10:59:53.266: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 10:59:53.266: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 10:59:53.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 10:59:54.181: INFO: stderr: "I0131 10:59:53.780212     329 log.go:172] (0xc00072a370) (0xc000770640) Create stream\nI0131 10:59:53.780659     329 log.go:172] (0xc00072a370) (0xc000770640) Stream added, broadcasting: 1\nI0131 10:59:53.791101     329 log.go:172] (0xc00072a370) Reply frame received for 1\nI0131 10:59:53.791150     329 log.go:172] (0xc00072a370) (0xc0007706e0) Create stream\nI0131 10:59:53.791158     329 log.go:172] (0xc00072a370) (0xc0007706e0) Stream added, broadcasting: 3\nI0131 10:59:53.795803     329 log.go:172] (0xc00072a370) Reply frame received for 3\nI0131 10:59:53.795823     329 log.go:172] (0xc00072a370) (0xc00067adc0) Create stream\nI0131 10:59:53.795832     329 log.go:172] (0xc00072a370) (0xc00067adc0) Stream added, broadcasting: 5\nI0131 10:59:53.802261     329 log.go:172] (0xc00072a370) Reply frame received for 5\nI0131 10:59:54.059397     329 log.go:172] (0xc00072a370) Data frame received for 3\nI0131 10:59:54.059450     329 log.go:172] (0xc0007706e0) (3) Data frame handling\nI0131 10:59:54.059461     329 log.go:172] (0xc0007706e0) (3) Data frame sent\nI0131 10:59:54.174035     329 log.go:172] (0xc00072a370) Data frame received for 1\nI0131 10:59:54.174116     329 log.go:172] (0xc00072a370) (0xc00067adc0) Stream removed, broadcasting: 5\nI0131 10:59:54.174169     329 log.go:172] (0xc000770640) (1) Data frame handling\nI0131 10:59:54.174179     329 log.go:172] (0xc000770640) (1) Data frame sent\nI0131 10:59:54.174196     329 log.go:172] (0xc00072a370) (0xc0007706e0) Stream removed, broadcasting: 3\nI0131 10:59:54.174216     329 log.go:172] (0xc00072a370) (0xc000770640) Stream removed, broadcasting: 1\nI0131 10:59:54.174235     329 log.go:172] (0xc00072a370) Go away received\nI0131 10:59:54.174829     329 log.go:172] (0xc00072a370) (0xc000770640) Stream removed, broadcasting: 1\nI0131 10:59:54.174893     329 log.go:172] (0xc00072a370) (0xc0007706e0) Stream removed, broadcasting: 3\nI0131 10:59:54.174934     329 log.go:172] (0xc00072a370) (0xc00067adc0) Stream removed, broadcasting: 5\n"
Jan 31 10:59:54.181: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 10:59:54.181: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 10:59:54.181: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 10:59:54.192: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 31 11:00:04.216: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 11:00:04.216: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 11:00:04.216: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
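The framework re-checks pod conditions on a fixed interval until the expected state (here Ready=false on all three replicas) is observed or a timeout expires. A generic poll-until-condition loop in that spirit, demonstrated locally (the helper name and the marker-file condition are illustrative, not the framework's code):

```shell
# poll_until TIMEOUT INTERVAL CMD...: re-run CMD every INTERVAL seconds
# until it succeeds or TIMEOUT seconds have elapsed (returns 1 on timeout).
poll_until() {
  local timeout=$1 interval=$2 waited=0
  shift 2
  until "$@"; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep "$interval"
    waited=$((waited + interval))
  done
}

# demo: the condition becomes true once a background job drops a marker file
tmpdir=$(mktemp -d)
( sleep 1; touch "$tmpdir/ready" ) &
poll_until 5 1 test -f "$tmpdir/ready"
rc=$?
```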
Jan 31 11:00:04.249: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:04.249: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:04.250: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:04.250: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:04.250: INFO: 
Jan 31 11:00:04.250: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:06.239: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:06.240: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:06.240: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:06.240: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:06.240: INFO: 
Jan 31 11:00:06.240: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:07.519: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:07.519: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:07.519: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:07.519: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:07.519: INFO: 
Jan 31 11:00:07.519: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:08.554: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:08.554: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:08.554: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:08.554: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:08.554: INFO: 
Jan 31 11:00:08.554: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:09.704: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:09.704: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:09.704: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:09.704: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:09.704: INFO: 
Jan 31 11:00:09.704: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:11.032: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:11.032: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:11.032: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:11.032: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:11.032: INFO: 
Jan 31 11:00:11.032: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:12.069: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:12.069: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:12.069: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:12.069: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:12.069: INFO: 
Jan 31 11:00:12.069: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:13.114: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:13.114: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:13.114: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:13.114: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:13.114: INFO: 
Jan 31 11:00:13.114: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 11:00:14.130: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 31 11:00:14.130: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:08 +0000 UTC  }]
Jan 31 11:00:14.130: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 10:59:29 +0000 UTC  }]
Jan 31 11:00:14.131: INFO: 
Jan 31 11:00:14.131: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-wqmbh
Jan 31 11:00:15.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:00:15.393: INFO: rc: 1
Jan 31 11:00:15.394: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001a7cf90 exit status 1   true [0xc000499190 0xc0004991b0 0xc000499210] [0xc000499190 0xc0004991b0 0xc000499210] [0xc0004991a8 0xc0004991f0] [0x935700 0x935700] 0xc001de2180 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

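The block above shows the framework's retry pattern: the `kubectl exec` returns rc 1 (first because the nginx container is gone, then because the pod itself is deleted), and the test waits 10s before trying again. A minimal sketch of that loop, with an assumed `(rc, stdout, stderr)` command signature and illustrative timeout:

```python
import time

def run_host_cmd_with_retry(cmd_fn, interval_s=10, timeout_s=300):
    """Retry a command until it succeeds (rc == 0) or the timeout expires.

    cmd_fn returns (rc, stdout, stderr); this signature and the timeout
    value are assumptions made for illustration, mirroring the
    "Waiting 10s to retry failed RunHostCmd" lines in the log.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        rc, out, err = cmd_fn()
        if rc == 0:
            return out
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command still failing: rc={rc}, stderr={err!r}")
        time.sleep(interval_s)
```

In the actual run the command keeps failing because the pod is being scaled away, which is expected: the test only gives up on the file move once the StatefulSet is gone.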
Jan 31 11:00:25.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:00:25.565: INFO: rc: 1
Jan 31 11:00:25.565: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a7d0b0 exit status 1   true [0xc000499218 0xc0004992c8 0xc000499308] [0xc000499218 0xc0004992c8 0xc000499308] [0xc0004992c0 0xc000499300] [0x935700 0x935700] 0xc001de2420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:00:35.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:00:35.730: INFO: rc: 1
Jan 31 11:00:35.731: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00173ccc0 exit status 1   true [0xc00209ec10 0xc00209ec28 0xc00209ec40] [0xc00209ec10 0xc00209ec28 0xc00209ec40] [0xc00209ec20 0xc00209ec38] [0x935700 0x935700] 0xc001cc8780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:00:45.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:00:45.870: INFO: rc: 1
Jan 31 11:00:45.870: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a7d380 exit status 1   true [0xc000499340 0xc000499370 0xc0004993d8] [0xc000499340 0xc000499370 0xc0004993d8] [0xc000499358 0xc0004993b8] [0x935700 0x935700] 0xc001de3b60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:00:55.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:00:56.063: INFO: rc: 1
Jan 31 11:00:56.064: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d06180 exit status 1   true [0xc00000e188 0xc00000e258 0xc00000ebc8] [0xc00000e188 0xc00000e258 0xc00000ebc8] [0xc00000e1c8 0xc00000eb98] [0x935700 0x935700] 0xc00199a1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:01:06.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:01:06.250: INFO: rc: 1
Jan 31 11:01:06.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577ad0 exit status 1   true [0xc00016e000 0xc000436138 0xc0004361c8] [0xc00016e000 0xc000436138 0xc0004361c8] [0xc000436098 0xc000436198] [0x935700 0x935700] 0xc0014b2240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:01:16.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:01:16.430: INFO: rc: 1
Jan 31 11:01:16.430: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca240 exit status 1   true [0xc001226010 0xc001226040 0xc001226058] [0xc001226010 0xc001226040 0xc001226058] [0xc001226038 0xc001226050] [0x935700 0x935700] 0xc0010661e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:01:26.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:01:26.683: INFO: rc: 1
Jan 31 11:01:26.684: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d062d0 exit status 1   true [0xc00000ebe0 0xc00000ec90 0xc00000ed38] [0xc00000ebe0 0xc00000ec90 0xc00000ed38] [0xc00000ec48 0xc00000ed20] [0x935700 0x935700] 0xc00199ad20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:01:36.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:01:36.860: INFO: rc: 1
Jan 31 11:01:36.861: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577bf0 exit status 1   true [0xc0004361d8 0xc000436210 0xc0004362a0] [0xc0004361d8 0xc000436210 0xc0004362a0] [0xc0004361f8 0xc000436268] [0x935700 0x935700] 0xc0014b24e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:01:46.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:01:47.051: INFO: rc: 1
Jan 31 11:01:47.051: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577d10 exit status 1   true [0xc0004362a8 0xc000436370 0xc0004363f0] [0xc0004362a8 0xc000436370 0xc0004363f0] [0xc000436308 0xc0004363d0] [0x935700 0x935700] 0xc0014b2780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:01:57.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:01:57.263: INFO: rc: 1
Jan 31 11:01:57.264: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d06450 exit status 1   true [0xc00000ed50 0xc00000edd0 0xc00000ee08] [0xc00000ed50 0xc00000edd0 0xc00000ee08] [0xc00000eda0 0xc00000edf0] [0x935700 0x935700] 0xc00199afc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:02:07.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:02:07.486: INFO: rc: 1
Jan 31 11:02:07.487: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d06570 exit status 1   true [0xc00000ee50 0xc00000eea8 0xc00000ef08] [0xc00000ee50 0xc00000eea8 0xc00000ef08] [0xc00000ee98 0xc00000eef0] [0x935700 0x935700] 0xc00199b260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:02:17.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:02:17.723: INFO: rc: 1
Jan 31 11:02:17.723: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577e60 exit status 1   true [0xc000436438 0xc0004364f8 0xc000436588] [0xc000436438 0xc0004364f8 0xc000436588] [0xc0004364c0 0xc000436580] [0x935700 0x935700] 0xc0014b2a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:02:27.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:02:27.962: INFO: rc: 1
Jan 31 11:02:27.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d066c0 exit status 1   true [0xc00000ef20 0xc00000ef98 0xc00000efe0] [0xc00000ef20 0xc00000ef98 0xc00000efe0] [0xc00000ef70 0xc00000efb0] [0x935700 0x935700] 0xc00199b500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:02:37.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:02:38.166: INFO: rc: 1
Jan 31 11:02:38.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d067e0 exit status 1   true [0xc00000efe8 0xc00000f030 0xc00000f068] [0xc00000efe8 0xc00000f030 0xc00000f068] [0xc00000f018 0xc00000f050] [0x935700 0x935700] 0xc00199b7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:02:48.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:02:48.393: INFO: rc: 1
Jan 31 11:02:48.394: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577fb0 exit status 1   true [0xc0004365c0 0xc000436650 0xc0004366f8] [0xc0004365c0 0xc000436650 0xc0004366f8] [0xc0004365e8 0xc0004366b0] [0x935700 0x935700] 0xc0014b2cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:02:58.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:02:58.600: INFO: rc: 1
Jan 31 11:02:58.601: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577aa0 exit status 1   true [0xc000436090 0xc000436178 0xc0004361d8] [0xc000436090 0xc000436178 0xc0004361d8] [0xc000436138 0xc0004361c8] [0x935700 0x935700] 0xc0014b2240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:03:08.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:03:08.782: INFO: rc: 1
Jan 31 11:03:08.783: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577c50 exit status 1   true [0xc0004361e0 0xc000436258 0xc0004362a8] [0xc0004361e0 0xc000436258 0xc0004362a8] [0xc000436210 0xc0004362a0] [0x935700 0x935700] 0xc0014b24e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:03:18.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:03:18.994: INFO: rc: 1
Jan 31 11:03:18.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca210 exit status 1   true [0xc001226010 0xc001226040 0xc001226058] [0xc001226010 0xc001226040 0xc001226058] [0xc001226038 0xc001226050] [0x935700 0x935700] 0xc0010661e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:03:28.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:03:29.207: INFO: rc: 1
Jan 31 11:03:29.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d061b0 exit status 1   true [0xc00000e100 0xc00000e1c8 0xc00000eb98] [0xc00000e100 0xc00000e1c8 0xc00000eb98] [0xc00000e1a8 0xc00000eb80] [0x935700 0x935700] 0xc00199a1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:03:39.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:03:39.455: INFO: rc: 1
Jan 31 11:03:39.455: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca3c0 exit status 1   true [0xc001226060 0xc001226078 0xc0012260c8] [0xc001226060 0xc001226078 0xc0012260c8] [0xc001226070 0xc0012260b0] [0x935700 0x935700] 0xc001066480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:03:49.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:03:49.626: INFO: rc: 1
Jan 31 11:03:49.626: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca570 exit status 1   true [0xc0012260d0 0xc0012260e8 0xc001226120] [0xc0012260d0 0xc0012260e8 0xc001226120] [0xc0012260e0 0xc001226118] [0x935700 0x935700] 0xc001066a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:03:59.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:03:59.930: INFO: rc: 1
Jan 31 11:03:59.930: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577d70 exit status 1   true [0xc0004362e0 0xc000436378 0xc000436438] [0xc0004362e0 0xc000436378 0xc000436438] [0xc000436370 0xc0004363f0] [0x935700 0x935700] 0xc0014b2780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:04:09.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:04:10.097: INFO: rc: 1
Jan 31 11:04:10.098: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e56120 exit status 1   true [0xc000498020 0xc0004980e0 0xc0004981e8] [0xc000498020 0xc0004980e0 0xc0004981e8] [0xc0004980d8 0xc000498128] [0x935700 0x935700] 0xc001a782a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:04:20.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:04:20.348: INFO: rc: 1
Jan 31 11:04:20.349: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca690 exit status 1   true [0xc001226128 0xc001226140 0xc001226190] [0xc001226128 0xc001226140 0xc001226190] [0xc001226138 0xc001226178] [0x935700 0x935700] 0xc001067140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:04:30.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:04:30.609: INFO: rc: 1
Jan 31 11:04:30.610: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e562d0 exit status 1   true [0xc000498208 0xc000498230 0xc0004982c8] [0xc000498208 0xc000498230 0xc0004982c8] [0xc000498220 0xc0004982b0] [0x935700 0x935700] 0xc001596780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:04:40.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:04:40.738: INFO: rc: 1
Jan 31 11:04:40.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000577ec0 exit status 1   true [0xc000436480 0xc000436558 0xc0004365c0] [0xc000436480 0xc000436558 0xc0004365c0] [0xc0004364f8 0xc000436588] [0x935700 0x935700] 0xc0014b2a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:04:50.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:04:51.055: INFO: rc: 1
Jan 31 11:04:51.056: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca810 exit status 1   true [0xc0012261a0 0xc0012261c8 0xc001226218] [0xc0012261a0 0xc0012261c8 0xc001226218] [0xc0012261c0 0xc001226200] [0x935700 0x935700] 0xc0010673e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:05:01.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:05:01.237: INFO: rc: 1
Jan 31 11:05:01.237: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bca960 exit status 1   true [0xc001226220 0xc001226250 0xc001226288] [0xc001226220 0xc001226250 0xc001226288] [0xc001226230 0xc001226270] [0x935700 0x935700] 0xc001067680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:05:11.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:05:12.252: INFO: rc: 1
Jan 31 11:05:12.252: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d06150 exit status 1   true [0xc000436090 0xc000436178 0xc0004361d8] [0xc000436090 0xc000436178 0xc0004361d8] [0xc000436138 0xc0004361c8] [0x935700 0x935700] 0xc001a782a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 31 11:05:22.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqmbh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 11:05:22.417: INFO: rc: 1
Jan 31 11:05:22.417: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 31 11:05:22.417: INFO: Scaling statefulset ss to 0
Jan 31 11:05:22.443: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 31 11:05:22.447: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wqmbh
Jan 31 11:05:22.452: INFO: Scaling statefulset ss to 0
Jan 31 11:05:22.471: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 11:05:22.476: INFO: Deleting statefulset ss
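The cleanup sequence above (scale to 0, wait for `status.replicas` to reach 0, then delete) amounts to a simple polling loop. A sketch under assumed names, where the caller supplies the status lookup (e.g. wrapping `kubectl get statefulset ss -o jsonpath='{.status.replicas}'`):

```python
import time

def wait_for_replicas(get_status_replicas, want=0, interval_s=1.0, timeout_s=600):
    """Poll until the StatefulSet's status.replicas equals `want`.

    get_status_replicas is a caller-supplied function; its name, the
    polling interval, and the timeout are illustrative assumptions,
    not the framework's actual values.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status_replicas() == want:
            return True
        time.sleep(interval_s)
    return False
```

Deleting the StatefulSet only after the observed replica count hits zero avoids racing the controller while it is still tearing pods down.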
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:05:22.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wqmbh" for this suite.
Jan 31 11:05:30.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:05:31.052: INFO: namespace: e2e-tests-statefulset-wqmbh, resource: bindings, ignored listing per whitelist
Jan 31 11:05:31.055: INFO: namespace e2e-tests-statefulset-wqmbh deletion completed in 8.509025935s

• [SLOW TEST:383.069 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:05:31.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gcckz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 11:05:31.328: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 11:06:11.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gcckz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 11:06:11.960: INFO: >>> kubeConfig: /root/.kube/config
I0131 11:06:12.043703       8 log.go:172] (0xc000d3c370) (0xc001501540) Create stream
I0131 11:06:12.043890       8 log.go:172] (0xc000d3c370) (0xc001501540) Stream added, broadcasting: 1
I0131 11:06:12.053637       8 log.go:172] (0xc000d3c370) Reply frame received for 1
I0131 11:06:12.053697       8 log.go:172] (0xc000d3c370) (0xc001a0a0a0) Create stream
I0131 11:06:12.053704       8 log.go:172] (0xc000d3c370) (0xc001a0a0a0) Stream added, broadcasting: 3
I0131 11:06:12.055634       8 log.go:172] (0xc000d3c370) Reply frame received for 3
I0131 11:06:12.055658       8 log.go:172] (0xc000d3c370) (0xc0015015e0) Create stream
I0131 11:06:12.055668       8 log.go:172] (0xc000d3c370) (0xc0015015e0) Stream added, broadcasting: 5
I0131 11:06:12.056735       8 log.go:172] (0xc000d3c370) Reply frame received for 5
I0131 11:06:12.287930       8 log.go:172] (0xc000d3c370) Data frame received for 3
I0131 11:06:12.288047       8 log.go:172] (0xc001a0a0a0) (3) Data frame handling
I0131 11:06:12.288086       8 log.go:172] (0xc001a0a0a0) (3) Data frame sent
I0131 11:06:12.456087       8 log.go:172] (0xc000d3c370) Data frame received for 1
I0131 11:06:12.456476       8 log.go:172] (0xc000d3c370) (0xc001a0a0a0) Stream removed, broadcasting: 3
I0131 11:06:12.456643       8 log.go:172] (0xc001501540) (1) Data frame handling
I0131 11:06:12.456748       8 log.go:172] (0xc001501540) (1) Data frame sent
I0131 11:06:12.457185       8 log.go:172] (0xc000d3c370) (0xc0015015e0) Stream removed, broadcasting: 5
I0131 11:06:12.457608       8 log.go:172] (0xc000d3c370) (0xc001501540) Stream removed, broadcasting: 1
I0131 11:06:12.457722       8 log.go:172] (0xc000d3c370) Go away received
I0131 11:06:12.458235       8 log.go:172] (0xc000d3c370) (0xc001501540) Stream removed, broadcasting: 1
I0131 11:06:12.458279       8 log.go:172] (0xc000d3c370) (0xc001a0a0a0) Stream removed, broadcasting: 3
I0131 11:06:12.458307       8 log.go:172] (0xc000d3c370) (0xc0015015e0) Stream removed, broadcasting: 5
Jan 31 11:06:12.458: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:06:12.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gcckz" for this suite.
Jan 31 11:06:36.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:06:36.657: INFO: namespace: e2e-tests-pod-network-test-gcckz, resource: bindings, ignored listing per whitelist
Jan 31 11:06:36.754: INFO: namespace e2e-tests-pod-network-test-gcckz deletion completed in 24.272976659s

• [SLOW TEST:65.699 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
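The ExecWithOptions call above probes UDP connectivity by curling the test container's `/dial` endpoint, which replies with a JSON list of per-try responses; the check passes when the target pod's hostname appears in that list. A minimal sketch of that response check, assuming a netexec-style payload shape (the response string and hostname here are illustrative, not taken from this run):

```shell
# Illustrative /dial response; the real probe in the log is:
#   curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'
response='{"responses":["netserver-0"]}'
expected="netserver-0"

# The test passes when the target pod's hostname appears in the responses list.
if printf '%s' "$response" | grep -q "\"$expected\""; then
  echo "udp endpoint reachable"
else
  echo "udp endpoint unreachable" >&2
  exit 1
fi
```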
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:06:36.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 11:09:40.515: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:40.584: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:42.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:42.628: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:44.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:44.600: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:46.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:46.639: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:48.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:48.627: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:50.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:50.620: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:52.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:52.601: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:54.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:54.607: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:56.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:56.628: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:09:58.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:09:58.627: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:00.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:00.619: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:02.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:02.617: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:04.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:04.656: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:06.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:06.650: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:08.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:08.615: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:10.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:10.617: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:12.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:12.633: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:14.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:14.623: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:16.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:16.611: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:18.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:18.601: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:20.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:20.618: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:22.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:22.627: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:24.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:24.594: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:26.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:26.622: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:28.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:28.624: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:30.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:30.633: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:32.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:32.610: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:34.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:34.616: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:36.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:36.637: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:38.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:38.625: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:40.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:40.607: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:42.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:42.617: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:44.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:44.672: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:46.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:46.603: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:48.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:48.613: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:50.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:50.604: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:52.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:52.638: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:54.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:54.612: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:56.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:56.612: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:10:58.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:10:58.619: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:00.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:00.624: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:02.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:02.609: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:04.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:04.615: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:06.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:06.612: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:08.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:08.620: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:10.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:10.610: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:12.585: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:12.641: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:14.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:14.631: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:16.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:16.644: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:18.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:18.612: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:20.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:20.617: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:22.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:22.623: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:24.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:24.654: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:26.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:26.609: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 11:11:28.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 11:11:28.618: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:11:28.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f4lzs" for this suite.
Jan 31 11:11:52.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:11:52.772: INFO: namespace: e2e-tests-container-lifecycle-hook-f4lzs, resource: bindings, ignored listing per whitelist
Jan 31 11:11:52.824: INFO: namespace e2e-tests-container-lifecycle-hook-f4lzs deletion completed in 24.189658096s

• [SLOW TEST:316.070 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
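The long run of "Waiting for pod ... to disappear" lines above is the framework polling roughly every two seconds until the deleted pod is gone. A hedged sketch of that poll loop, with the lookup stubbed so it runs standalone (`check_pod_gone` is a stand-in for checking whether `kubectl get pod pod-with-poststart-exec-hook` returns NotFound):

```shell
# Stubbed poll loop: pretend the pod disappears after a few probes so the
# control flow can run without a cluster.
attempts_left=5
check_pod_gone() {           # stand-in for the kubectl NotFound check
  [ "$attempts_left" -lt 3 ]
}
while ! check_pod_gone; do
  echo "Pod pod-with-poststart-exec-hook still exists"
  attempts_left=$((attempts_left - 1))
  # a real poll would 'sleep 2' here, giving up after a deadline
done
echo "Pod pod-with-poststart-exec-hook no longer exists"
```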
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:11:52.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 31 11:11:53.178: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:12:11.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-hbqg8" for this suite.
Jan 31 11:12:17.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:12:17.996: INFO: namespace: e2e-tests-init-container-hbqg8, resource: bindings, ignored listing per whitelist
Jan 31 11:12:18.111: INFO: namespace e2e-tests-init-container-hbqg8 deletion completed in 6.276076975s

• [SLOW TEST:25.287 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:12:18.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:12:18.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2gs8f" for this suite.
Jan 31 11:12:24.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:12:24.534: INFO: namespace: e2e-tests-services-2gs8f, resource: bindings, ignored listing per whitelist
Jan 31 11:12:25.468: INFO: namespace e2e-tests-services-2gs8f deletion completed in 7.1250628s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:7.356 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:12:25.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8f7e29aa-441a-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:12:25.629: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-2cxfm" to be "success or failure"
Jan 31 11:12:25.644: INFO: Pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.55001ms
Jan 31 11:12:28.213: INFO: Pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.583397714s
Jan 31 11:12:30.246: INFO: Pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.616498246s
Jan 31 11:12:32.263: INFO: Pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633437832s
Jan 31 11:12:34.274: INFO: Pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.644337029s
STEP: Saw pod success
Jan 31 11:12:34.274: INFO: Pod "pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:12:34.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 11:12:34.349: INFO: Waiting for pod pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005 to disappear
Jan 31 11:12:34.360: INFO: Pod pod-projected-configmaps-8f7edfd8-441a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:12:34.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2cxfm" for this suite.
Jan 31 11:12:40.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:12:40.561: INFO: namespace: e2e-tests-projected-2cxfm, resource: bindings, ignored listing per whitelist
Jan 31 11:12:40.663: INFO: namespace e2e-tests-projected-2cxfm deletion completed in 6.295117381s

• [SLOW TEST:15.195 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
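The "success or failure" wait in the log above polls the pod phase until it reaches a terminal state (Succeeded or Failed). A minimal sketch of that loop; the phase list simulates successive `kubectl get pod -o jsonpath='{.status.phase}'` results rather than querying a live cluster:

```shell
# Simulated phase sequence; a real wait would query the API server each
# round, sleep ~2s between probes, and give up after 5m.
for phase in Pending Pending Succeeded; do
  echo "Phase=\"$phase\""
  case "$phase" in
    Succeeded|Failed)
      echo 'satisfied condition "success or failure"'
      break ;;
  esac
done
```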
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:12:40.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 31 11:12:40.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:43.124: INFO: stderr: ""
Jan 31 11:12:43.124: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 11:12:43.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:43.360: INFO: stderr: ""
Jan 31 11:12:43.360: INFO: stdout: "update-demo-nautilus-hzz65 update-demo-nautilus-zqqt7 "
Jan 31 11:12:43.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzz65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:43.547: INFO: stderr: ""
Jan 31 11:12:43.548: INFO: stdout: ""
Jan 31 11:12:43.548: INFO: update-demo-nautilus-hzz65 is created but not running
Jan 31 11:12:48.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:48.786: INFO: stderr: ""
Jan 31 11:12:48.786: INFO: stdout: "update-demo-nautilus-hzz65 update-demo-nautilus-zqqt7 "
Jan 31 11:12:48.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzz65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:48.936: INFO: stderr: ""
Jan 31 11:12:48.936: INFO: stdout: ""
Jan 31 11:12:48.936: INFO: update-demo-nautilus-hzz65 is created but not running
Jan 31 11:12:53.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:54.206: INFO: stderr: ""
Jan 31 11:12:54.206: INFO: stdout: "update-demo-nautilus-hzz65 update-demo-nautilus-zqqt7 "
Jan 31 11:12:54.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzz65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:54.369: INFO: stderr: ""
Jan 31 11:12:54.369: INFO: stdout: ""
Jan 31 11:12:54.369: INFO: update-demo-nautilus-hzz65 is created but not running
Jan 31 11:12:59.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:59.580: INFO: stderr: ""
Jan 31 11:12:59.581: INFO: stdout: "update-demo-nautilus-hzz65 update-demo-nautilus-zqqt7 "
Jan 31 11:12:59.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzz65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:59.782: INFO: stderr: ""
Jan 31 11:12:59.782: INFO: stdout: "true"
Jan 31 11:12:59.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzz65 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:12:59.938: INFO: stderr: ""
Jan 31 11:12:59.938: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 11:12:59.938: INFO: validating pod update-demo-nautilus-hzz65
Jan 31 11:12:59.969: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 11:12:59.970: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 11:12:59.970: INFO: update-demo-nautilus-hzz65 is verified up and running
Jan 31 11:12:59.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqqt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:00.161: INFO: stderr: ""
Jan 31 11:13:00.161: INFO: stdout: "true"
Jan 31 11:13:00.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zqqt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:00.268: INFO: stderr: ""
Jan 31 11:13:00.268: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 11:13:00.268: INFO: validating pod update-demo-nautilus-zqqt7
Jan 31 11:13:00.277: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 11:13:00.277: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 11:13:00.277: INFO: update-demo-nautilus-zqqt7 is verified up and running
STEP: rolling-update to new replication controller
Jan 31 11:13:00.279: INFO: scanned /root for discovery docs: 
Jan 31 11:13:00.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:36.450: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 11:13:36.451: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 11:13:36.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:36.697: INFO: stderr: ""
Jan 31 11:13:36.697: INFO: stdout: "update-demo-kitten-6c2gx update-demo-kitten-pv6gp "
Jan 31 11:13:36.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6c2gx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:36.859: INFO: stderr: ""
Jan 31 11:13:36.859: INFO: stdout: "true"
Jan 31 11:13:36.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6c2gx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:37.023: INFO: stderr: ""
Jan 31 11:13:37.023: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 11:13:37.023: INFO: validating pod update-demo-kitten-6c2gx
Jan 31 11:13:37.111: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 11:13:37.111: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 31 11:13:37.111: INFO: update-demo-kitten-6c2gx is verified up and running
Jan 31 11:13:37.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pv6gp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:37.255: INFO: stderr: ""
Jan 31 11:13:37.255: INFO: stdout: "true"
Jan 31 11:13:37.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pv6gp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ss6ck'
Jan 31 11:13:37.418: INFO: stderr: ""
Jan 31 11:13:37.418: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 11:13:37.418: INFO: validating pod update-demo-kitten-pv6gp
Jan 31 11:13:37.428: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 11:13:37.428: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 31 11:13:37.428: INFO: update-demo-kitten-pv6gp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:13:37.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ss6ck" for this suite.
Jan 31 11:14:03.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:14:03.679: INFO: namespace: e2e-tests-kubectl-ss6ck, resource: bindings, ignored listing per whitelist
Jan 31 11:14:03.717: INFO: namespace e2e-tests-kubectl-ss6ck deletion completed in 26.284538658s

• [SLOW TEST:83.054 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:14:03.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:14:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-v64v9" for this suite.
Jan 31 11:14:18.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:14:18.435: INFO: namespace: e2e-tests-emptydir-wrapper-v64v9, resource: bindings, ignored listing per whitelist
Jan 31 11:14:18.445: INFO: namespace e2e-tests-emptydir-wrapper-v64v9 deletion completed in 6.187316194s

• [SLOW TEST:14.728 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:14:18.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 11:14:18.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-g2kcg" to be "success or failure"
Jan 31 11:14:18.720: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.279071ms
Jan 31 11:14:20.734: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030118593s
Jan 31 11:14:22.777: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073390755s
Jan 31 11:14:24.806: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102229005s
Jan 31 11:14:26.815: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111485585s
Jan 31 11:14:28.844: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140248644s
STEP: Saw pod success
Jan 31 11:14:28.844: INFO: Pod "downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:14:28.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 11:14:29.161: INFO: Waiting for pod downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005 to disappear
Jan 31 11:14:29.170: INFO: Pod downwardapi-volume-d2eee3fc-441a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:14:29.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g2kcg" for this suite.
Jan 31 11:14:35.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:14:35.317: INFO: namespace: e2e-tests-downward-api-g2kcg, resource: bindings, ignored listing per whitelist
Jan 31 11:14:35.382: INFO: namespace e2e-tests-downward-api-g2kcg deletion completed in 6.204108797s

• [SLOW TEST:16.936 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:14:35.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-dd006d2d-441a-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 11:14:35.616: INFO: Waiting up to 5m0s for pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-sgrh4" to be "success or failure"
Jan 31 11:14:35.625: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.212813ms
Jan 31 11:14:37.636: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020200755s
Jan 31 11:14:39.649: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033083341s
Jan 31 11:14:41.673: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056750389s
Jan 31 11:14:43.882: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265593406s
Jan 31 11:14:45.978: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.361743102s
STEP: Saw pod success
Jan 31 11:14:45.978: INFO: Pod "pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:14:45.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 11:14:46.318: INFO: Waiting for pod pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005 to disappear
Jan 31 11:14:46.393: INFO: Pod pod-secrets-dd0273ee-441a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:14:46.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sgrh4" for this suite.
Jan 31 11:14:52.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:14:52.682: INFO: namespace: e2e-tests-secrets-sgrh4, resource: bindings, ignored listing per whitelist
Jan 31 11:14:52.779: INFO: namespace e2e-tests-secrets-sgrh4 deletion completed in 6.375983085s

• [SLOW TEST:17.396 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:14:52.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 31 11:14:52.935: INFO: Waiting up to 5m0s for pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-gxs9s" to be "success or failure"
Jan 31 11:14:52.949: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.871958ms
Jan 31 11:14:54.969: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033728137s
Jan 31 11:14:56.989: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053110629s
Jan 31 11:14:59.805: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.869418104s
Jan 31 11:15:01.828: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.89299305s
Jan 31 11:15:03.854: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.918184874s
STEP: Saw pod success
Jan 31 11:15:03.854: INFO: Pod "downward-api-e7564a59-441a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:15:03.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e7564a59-441a-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 11:15:04.434: INFO: Waiting for pod downward-api-e7564a59-441a-11ea-aae6-0242ac110005 to disappear
Jan 31 11:15:04.463: INFO: Pod downward-api-e7564a59-441a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:15:04.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gxs9s" for this suite.
Jan 31 11:15:10.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:15:10.655: INFO: namespace: e2e-tests-downward-api-gxs9s, resource: bindings, ignored listing per whitelist
Jan 31 11:15:10.834: INFO: namespace e2e-tests-downward-api-gxs9s deletion completed in 6.352170816s

• [SLOW TEST:18.055 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:15:10.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 11:15:11.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hg6t4'
Jan 31 11:15:11.241: INFO: stderr: ""
Jan 31 11:15:11.242: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 31 11:15:11.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hg6t4'
Jan 31 11:15:16.208: INFO: stderr: ""
Jan 31 11:15:16.209: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:15:16.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hg6t4" for this suite.
Jan 31 11:15:22.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:15:22.653: INFO: namespace: e2e-tests-kubectl-hg6t4, resource: bindings, ignored listing per whitelist
Jan 31 11:15:22.677: INFO: namespace e2e-tests-kubectl-hg6t4 deletion completed in 6.305282226s

• [SLOW TEST:11.842 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:15:22.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:15:48.897: INFO: Container started at 2020-01-31 11:15:30 +0000 UTC, pod became ready at 2020-01-31 11:15:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:15:48.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gx8g5" for this suite.
Jan 31 11:16:12.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:16:13.121: INFO: namespace: e2e-tests-container-probe-gx8g5, resource: bindings, ignored listing per whitelist
Jan 31 11:16:13.151: INFO: namespace e2e-tests-container-probe-gx8g5 deletion completed in 24.241466773s

• [SLOW TEST:50.474 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:16:13.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-1739ad62-441b-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 11:16:13.387: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-h2znk" to be "success or failure"
Jan 31 11:16:13.396: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.306942ms
Jan 31 11:16:15.416: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028560773s
Jan 31 11:16:17.447: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05986036s
Jan 31 11:16:19.798: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41094191s
Jan 31 11:16:21.813: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42592582s
Jan 31 11:16:23.848: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.460843612s
STEP: Saw pod success
Jan 31 11:16:23.848: INFO: Pod "pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:16:23.870: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 11:16:24.804: INFO: Waiting for pod pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005 to disappear
Jan 31 11:16:24.823: INFO: Pod pod-projected-secrets-173da1a8-441b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:16:24.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h2znk" for this suite.
Jan 31 11:16:30.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:16:31.076: INFO: namespace: e2e-tests-projected-h2znk, resource: bindings, ignored listing per whitelist
Jan 31 11:16:31.076: INFO: namespace e2e-tests-projected-h2znk deletion completed in 6.246172486s

• [SLOW TEST:17.925 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:16:31.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005
Jan 31 11:16:31.303: INFO: Pod name my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005: Found 0 pods out of 1
Jan 31 11:16:36.454: INFO: Pod name my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005: Found 1 pods out of 1
Jan 31 11:16:36.454: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005" are running
Jan 31 11:16:42.494: INFO: Pod "my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005-vmhjd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 11:16:31 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 11:16:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 11:16:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 11:16:31 +0000 UTC Reason: Message:}])
Jan 31 11:16:42.495: INFO: Trying to dial the pod
Jan 31 11:16:47.548: INFO: Controller my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005: Got expected result from replica 1 [my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005-vmhjd]: "my-hostname-basic-21ef35fc-441b-11ea-aae6-0242ac110005-vmhjd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:16:47.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-gmp6b" for this suite.
Jan 31 11:16:53.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:16:53.859: INFO: namespace: e2e-tests-replication-controller-gmp6b, resource: bindings, ignored listing per whitelist
Jan 31 11:16:53.877: INFO: namespace e2e-tests-replication-controller-gmp6b deletion completed in 6.321885403s

• [SLOW TEST:22.801 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:16:53.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-srmj5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-srmj5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-srmj5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 22.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.22_udp@PTR;check="$$(dig +tcp +noall +answer +search 22.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.22_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-srmj5;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-srmj5;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-srmj5.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-srmj5.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-srmj5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 22.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.22_udp@PTR;check="$$(dig +tcp +noall +answer +search 22.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.22_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 11:17:10.656: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.666: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.767: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-srmj5 from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.785: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-srmj5 from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.792: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.799: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.803: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.807: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.811: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.814: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.818: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.824: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.828: INFO: Unable to read 10.110.98.22_udp@PTR from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.833: INFO: Unable to read 10.110.98.22_tcp@PTR from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.836: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.839: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.842: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-srmj5 from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.846: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-srmj5 from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.851: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.855: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.859: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.864: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.868: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.872: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.876: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.880: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.884: INFO: Unable to read 10.110.98.22_udp@PTR from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.888: INFO: Unable to read 10.110.98.22_tcp@PTR from pod e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-2fb438cb-441b-11ea-aae6-0242ac110005)
Jan 31 11:17:10.888: INFO: Lookups using e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-srmj5 wheezy_tcp@dns-test-service.e2e-tests-dns-srmj5 wheezy_udp@dns-test-service.e2e-tests-dns-srmj5.svc wheezy_tcp@dns-test-service.e2e-tests-dns-srmj5.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.98.22_udp@PTR 10.110.98.22_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-srmj5 jessie_tcp@dns-test-service.e2e-tests-dns-srmj5 jessie_udp@dns-test-service.e2e-tests-dns-srmj5.svc jessie_tcp@dns-test-service.e2e-tests-dns-srmj5.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-srmj5.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-srmj5.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.98.22_udp@PTR 10.110.98.22_tcp@PTR]

Jan 31 11:17:16.051: INFO: DNS probes using e2e-tests-dns-srmj5/dns-test-2fb438cb-441b-11ea-aae6-0242ac110005 succeeded
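The probes also cover reverse lookups: the service IP 10.110.98.22 is queried above as `22.98.110.10.in-addr.arpa.`, i.e. the octets reversed under the `in-addr.arpa.` zone. A small sketch of that reversal (`ptr_name` is a hypothetical helper name):

```shell
ptr_name() {
  # Reverse the octets of an IPv4 address to form its in-addr.arpa PTR name,
  # e.g. 10.110.98.22 -> 22.98.110.10.in-addr.arpa.
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}
```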

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:17:16.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-srmj5" for this suite.
Jan 31 11:17:24.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:17:24.725: INFO: namespace: e2e-tests-dns-srmj5, resource: bindings, ignored listing per whitelist
Jan 31 11:17:24.790: INFO: namespace e2e-tests-dns-srmj5 deletion completed in 8.299813138s

• [SLOW TEST:30.912 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:17:24.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 31 11:17:24.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:25.473: INFO: stderr: ""
Jan 31 11:17:25.474: INFO: stdout: "pod/pause created\n"
Jan 31 11:17:25.474: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 31 11:17:25.474: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-h9brd" to be "running and ready"
Jan 31 11:17:25.567: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 92.697917ms
Jan 31 11:17:27.581: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106863744s
Jan 31 11:17:29.607: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132843482s
Jan 31 11:17:32.470: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.996422097s
Jan 31 11:17:34.502: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 9.027950759s
Jan 31 11:17:34.503: INFO: Pod "pause" satisfied condition "running and ready"
Jan 31 11:17:34.503: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 31 11:17:34.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:34.700: INFO: stderr: ""
Jan 31 11:17:34.700: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 31 11:17:34.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:34.836: INFO: stderr: ""
Jan 31 11:17:34.836: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 31 11:17:34.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:35.087: INFO: stderr: ""
Jan 31 11:17:35.087: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 31 11:17:35.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:35.279: INFO: stderr: ""
Jan 31 11:17:35.279: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   \n"
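As the two `kubectl label` invocations above show, `key=value` adds or updates a label while a trailing dash (`testing-label-`) removes it. A tiny sketch of that argument convention (`label_op` is a hypothetical classifier for illustration, not part of kubectl):

```shell
label_op() {
  # Classify a kubectl label argument:
  #   key=value adds/updates, key- removes, anything else is invalid.
  case "$1" in
    *=*) echo add ;;
    *-)  echo remove ;;
    *)   echo invalid ;;
  esac
}
```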
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 31 11:17:35.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:35.499: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 11:17:35.499: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 31 11:17:35.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-h9brd'
Jan 31 11:17:35.692: INFO: stderr: "No resources found.\n"
Jan 31 11:17:35.692: INFO: stdout: ""
Jan 31 11:17:35.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-h9brd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 11:17:35.811: INFO: stderr: ""
Jan 31 11:17:35.812: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:17:35.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h9brd" for this suite.
Jan 31 11:17:41.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:17:42.010: INFO: namespace: e2e-tests-kubectl-h9brd, resource: bindings, ignored listing per whitelist
Jan 31 11:17:42.025: INFO: namespace e2e-tests-kubectl-h9brd deletion completed in 6.197568787s

• [SLOW TEST:17.235 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:17:42.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 11:17:42.203: INFO: Waiting up to 5m0s for pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-fjv2w" to be "success or failure"
Jan 31 11:17:42.209: INFO: Pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.672859ms
Jan 31 11:17:44.240: INFO: Pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036105934s
Jan 31 11:17:46.256: INFO: Pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052924905s
Jan 31 11:17:48.636: INFO: Pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432825082s
Jan 31 11:17:50.667: INFO: Pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.463374716s
STEP: Saw pod success
Jan 31 11:17:50.667: INFO: Pod "pod-4c38c95e-441b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:17:50.677: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4c38c95e-441b-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 11:17:50.910: INFO: Waiting for pod pod-4c38c95e-441b-11ea-aae6-0242ac110005 to disappear
Jan 31 11:17:51.007: INFO: Pod pod-4c38c95e-441b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:17:51.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fjv2w" for this suite.
Jan 31 11:17:57.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:17:57.331: INFO: namespace: e2e-tests-emptydir-fjv2w, resource: bindings, ignored listing per whitelist
Jan 31 11:17:57.337: INFO: namespace e2e-tests-emptydir-fjv2w deletion completed in 6.309717098s

• [SLOW TEST:15.312 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
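The (root,0777,tmpfs) spec above mounts a Memory-medium emptyDir with mode 0777 and asserts the mount's permissions from inside the test container. A purely local sketch of that permission check, assuming a plain temp directory stands in for the tmpfs-backed mount and GNU `stat` is available:

```shell
# Local stand-in for the emptyDir permission assertion (GNU stat assumed).
mode_of() {
  # Print the octal permission bits of a path.
  stat -c '%a' "$1"
}
d=$(mktemp -d)
chmod 0777 "$d"
mode_of "$d"   # prints 777
```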
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:17:57.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 11:18:15.720: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 11:18:15.739: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 11:18:17.739: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 11:18:17.788: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 11:18:19.739: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 11:18:20.235: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 11:18:21.739: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 11:18:21.763: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 11:18:23.739: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 11:18:23.766: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:18:23.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-b7lfv" for this suite.
Jan 31 11:18:47.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:18:48.090: INFO: namespace: e2e-tests-container-lifecycle-hook-b7lfv, resource: bindings, ignored listing per whitelist
Jan 31 11:18:48.179: INFO: namespace e2e-tests-container-lifecycle-hook-b7lfv deletion completed in 24.251001328s

• [SLOW TEST:50.841 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
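The lifecycle spec above attaches a `preStop` HTTP hook to the pod and, after deletion, checks via a helper server that the hook fired before the container stopped. A minimal sketch of such a hook in a pod spec (the image, path, and port here are illustrative assumptions, not taken from the suite):

```yaml
# Illustrative preStop httpGet hook (image, path, and port are assumptions).
containers:
- name: pod-with-prestop-http-hook
  image: k8s.gcr.io/pause
  lifecycle:
    preStop:
      httpGet:
        path: /echo
        port: 8080
```

The kubelet issues the HTTP GET to the pod before sending the container its termination signal, which is what the "check prestop hook" step verifies.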
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:18:48.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 31 11:18:57.367: INFO: Successfully updated pod "annotationupdate73ddc32e-441b-11ea-aae6-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:18:59.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v5zvb" for this suite.
Jan 31 11:19:23.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:19:23.683: INFO: namespace: e2e-tests-projected-v5zvb, resource: bindings, ignored listing per whitelist
Jan 31 11:19:23.739: INFO: namespace e2e-tests-projected-v5zvb deletion completed in 24.277306763s

• [SLOW TEST:35.560 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:19:23.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 11:19:24.171: INFO: Waiting up to 5m0s for pod "pod-88f093df-441b-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-6hxm7" to be "success or failure"
Jan 31 11:19:24.180: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.732658ms
Jan 31 11:19:26.210: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038776459s
Jan 31 11:19:28.229: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057572958s
Jan 31 11:19:30.245: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073995344s
Jan 31 11:19:32.816: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.644751669s
Jan 31 11:19:34.847: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.676518595s
STEP: Saw pod success
Jan 31 11:19:34.848: INFO: Pod "pod-88f093df-441b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:19:34.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-88f093df-441b-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 11:19:34.979: INFO: Waiting for pod pod-88f093df-441b-11ea-aae6-0242ac110005 to disappear
Jan 31 11:19:35.005: INFO: Pod pod-88f093df-441b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:19:35.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6hxm7" for this suite.
Jan 31 11:19:41.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:19:41.372: INFO: namespace: e2e-tests-emptydir-6hxm7, resource: bindings, ignored listing per whitelist
Jan 31 11:19:41.435: INFO: namespace e2e-tests-emptydir-6hxm7 deletion completed in 6.412230156s

• [SLOW TEST:17.695 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:19:41.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0131 11:20:12.589038       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 11:20:12.589: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:20:12.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ft7cf" for this suite.
Jan 31 11:20:20.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:20:21.062: INFO: namespace: e2e-tests-gc-ft7cf, resource: bindings, ignored listing per whitelist
Jan 31 11:20:21.259: INFO: namespace e2e-tests-gc-ft7cf deletion completed in 8.646681078s

• [SLOW TEST:39.824 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
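The garbage-collector spec above deletes the Deployment with `deleteOptions.propagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet is left behind rather than cascaded. The delete request body looks roughly like this (a sketch of the `meta/v1` `DeleteOptions` schema):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With `Orphan`, the API server removes the owner object but strips owner references from its dependents instead of letting the garbage collector delete them.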
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:20:21.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-dxfs
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 11:20:21.816: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dxfs" in namespace "e2e-tests-subpath-hwnrb" to be "success or failure"
Jan 31 11:20:21.956: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 139.439796ms
Jan 31 11:20:24.071: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25424688s
Jan 31 11:20:26.083: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266094703s
Jan 31 11:20:28.113: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295960967s
Jan 31 11:20:30.153: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336425968s
Jan 31 11:20:32.173: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.355823444s
Jan 31 11:20:34.189: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.372342827s
Jan 31 11:20:36.215: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.39841977s
Jan 31 11:20:38.407: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=true. Elapsed: 16.59023902s
Jan 31 11:20:40.430: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 18.612967382s
Jan 31 11:20:42.456: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 20.639253482s
Jan 31 11:20:44.473: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 22.656276532s
Jan 31 11:20:46.502: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 24.685369128s
Jan 31 11:20:48.546: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 26.728852813s
Jan 31 11:20:50.580: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 28.763028596s
Jan 31 11:20:52.619: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 30.801704097s
Jan 31 11:20:54.632: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 32.815062363s
Jan 31 11:20:56.683: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Running", Reason="", readiness=false. Elapsed: 34.866345948s
Jan 31 11:20:58.701: INFO: Pod "pod-subpath-test-secret-dxfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.883600821s
STEP: Saw pod success
Jan 31 11:20:58.701: INFO: Pod "pod-subpath-test-secret-dxfs" satisfied condition "success or failure"
Jan 31 11:20:58.705: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-dxfs container test-container-subpath-secret-dxfs: 
STEP: delete the pod
Jan 31 11:20:58.789: INFO: Waiting for pod pod-subpath-test-secret-dxfs to disappear
Jan 31 11:20:58.796: INFO: Pod pod-subpath-test-secret-dxfs no longer exists
STEP: Deleting pod pod-subpath-test-secret-dxfs
Jan 31 11:20:58.796: INFO: Deleting pod "pod-subpath-test-secret-dxfs" in namespace "e2e-tests-subpath-hwnrb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:20:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hwnrb" for this suite.
Jan 31 11:21:04.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:21:05.173: INFO: namespace: e2e-tests-subpath-hwnrb, resource: bindings, ignored listing per whitelist
Jan 31 11:21:05.179: INFO: namespace e2e-tests-subpath-hwnrb deletion completed in 6.370092566s

• [SLOW TEST:43.919 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
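Editor's note: the subpath test above mounts a Secret volume and reads one key through `volumeMounts.subPath`. A minimal sketch of that kind of pod spec, assuming an illustrative pod, secret, and key name (none are taken from the test source):

```yaml
# Sketch: expose a single Secret key to a container as one file via subPath
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/mnt/secret-file"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/secret-file
      subPath: secret-key          # mounts only this key, not the whole volume
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret        # assumed Secret containing key "secret-key"
```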
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:21:05.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 31 11:21:05.492: INFO: Waiting up to 5m0s for pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-zcfmm" to be "success or failure"
Jan 31 11:21:05.502: INFO: Pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.096339ms
Jan 31 11:21:07.521: INFO: Pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029063218s
Jan 31 11:21:09.538: INFO: Pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045872138s
Jan 31 11:21:11.559: INFO: Pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066887101s
Jan 31 11:21:13.610: INFO: Pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11758059s
STEP: Saw pod success
Jan 31 11:21:13.611: INFO: Pod "downward-api-c55bb902-441b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:21:13.625: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c55bb902-441b-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 11:21:13.798: INFO: Waiting for pod downward-api-c55bb902-441b-11ea-aae6-0242ac110005 to disappear
Jan 31 11:21:13.815: INFO: Pod downward-api-c55bb902-441b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:21:13.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zcfmm" for this suite.
Jan 31 11:21:19.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:21:19.938: INFO: namespace: e2e-tests-downward-api-zcfmm, resource: bindings, ignored listing per whitelist
Jan 31 11:21:20.004: INFO: namespace e2e-tests-downward-api-zcfmm deletion completed in 6.179214721s

• [SLOW TEST:14.826 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
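Editor's note: the Downward API test above injects the pod's own name, namespace, and IP into its environment. The mechanism is `valueFrom.fieldRef` on each env var; a sketch of the relevant container fragment (env var names are illustrative):

```yaml
# Sketch: downward-API env vars on a container spec
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP     # resolved at runtime, once the pod has an IP
```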
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:21:20.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 31 11:21:20.261: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-lvhzr" to be "success or failure"
Jan 31 11:21:20.350: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 88.924627ms
Jan 31 11:21:22.373: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11184518s
Jan 31 11:21:25.095: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.833309367s
Jan 31 11:21:27.118: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.856562836s
Jan 31 11:21:29.154: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.892373854s
Jan 31 11:21:31.174: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.912460874s
Jan 31 11:21:33.195: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.933981299s
STEP: Saw pod success
Jan 31 11:21:33.196: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 31 11:21:33.202: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 31 11:21:33.318: INFO: Waiting for pod pod-host-path-test to disappear
Jan 31 11:21:33.337: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:21:33.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-lvhzr" for this suite.
Jan 31 11:21:39.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:21:39.441: INFO: namespace: e2e-tests-hostpath-lvhzr, resource: bindings, ignored listing per whitelist
Jan 31 11:21:39.782: INFO: namespace e2e-tests-hostpath-lvhzr deletion completed in 6.42472405s

• [SLOW TEST:19.776 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
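Editor's note: the HostPath test above creates a pod whose container inspects the mode of a hostPath mount. A sketch of the volume fragment such a pod uses, with an illustrative path (the test's actual path and mode check live in its Go source):

```yaml
# Sketch: hostPath volume on a pod spec
volumes:
- name: test-volume
  hostPath:
    path: /tmp/test-hostpath     # illustrative host directory
    type: DirectoryOrCreate      # create the directory on the node if absent
```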
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:21:39.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 31 11:21:39.992: INFO: Waiting up to 5m0s for pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005" in namespace "e2e-tests-var-expansion-tqkhn" to be "success or failure"
Jan 31 11:21:40.006: INFO: Pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.598839ms
Jan 31 11:21:42.228: INFO: Pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235680311s
Jan 31 11:21:44.248: INFO: Pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255897703s
Jan 31 11:21:46.354: INFO: Pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36150233s
Jan 31 11:21:48.407: INFO: Pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.414656084s
STEP: Saw pod success
Jan 31 11:21:48.407: INFO: Pod "var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:21:48.414: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 11:21:48.621: INFO: Waiting for pod var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005 to disappear
Jan 31 11:21:48.659: INFO: Pod var-expansion-d9f584f1-441b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:21:48.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-tqkhn" for this suite.
Jan 31 11:21:56.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:21:56.998: INFO: namespace: e2e-tests-var-expansion-tqkhn, resource: bindings, ignored listing per whitelist
Jan 31 11:21:57.132: INFO: namespace e2e-tests-var-expansion-tqkhn deletion completed in 8.354768862s

• [SLOW TEST:17.349 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
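Editor's note: the Variable Expansion test above composes one env var from another using the `$(VAR)` syntax, which the kubelet expands against previously defined vars in the same container. A sketch with illustrative names and values:

```yaml
# Sketch: composing env vars; $(FOO) expands to the value defined above it
env:
- name: FOO
  value: foo-value
- name: COMPOSED
  value: "prefix-$(FOO)-suffix"   # becomes "prefix-foo-value-suffix"
```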
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:21:57.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 31 11:21:57.389: INFO: Waiting up to 5m0s for pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005" in namespace "e2e-tests-containers-fzxh4" to be "success or failure"
Jan 31 11:21:57.396: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.657176ms
Jan 31 11:21:59.406: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016153866s
Jan 31 11:22:01.420: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030820561s
Jan 31 11:22:03.436: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046726384s
Jan 31 11:22:05.454: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06402037s
Jan 31 11:22:07.474: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084581918s
STEP: Saw pod success
Jan 31 11:22:07.475: INFO: Pod "client-containers-e45220ec-441b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:22:07.489: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e45220ec-441b-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 11:22:08.458: INFO: Waiting for pod client-containers-e45220ec-441b-11ea-aae6-0242ac110005 to disappear
Jan 31 11:22:08.739: INFO: Pod client-containers-e45220ec-441b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:22:08.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fzxh4" for this suite.
Jan 31 11:22:14.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:22:14.865: INFO: namespace: e2e-tests-containers-fzxh4, resource: bindings, ignored listing per whitelist
Jan 31 11:22:15.037: INFO: namespace e2e-tests-containers-fzxh4 deletion completed in 6.278204445s

• [SLOW TEST:17.905 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
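Editor's note: the Docker Containers test above overrides the image's default arguments (its Docker CMD) via the container's `args` field; setting `command` instead would override the ENTRYPOINT. A sketch with an illustrative image and arguments:

```yaml
# Sketch: overriding the image's default CMD
containers:
- name: test-container
  image: docker.io/library/busybox   # illustrative image
  args: ["echo", "overridden arguments"]  # replaces the image's CMD;
                                          # `command` (unset here) would replace ENTRYPOINT
```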
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:22:15.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wv2cg
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 31 11:22:15.538: INFO: Found 0 stateful pods, waiting for 3
Jan 31 11:22:25.645: INFO: Found 2 stateful pods, waiting for 3
Jan 31 11:22:35.561: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:22:35.561: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:22:35.561: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 11:22:45.559: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:22:45.559: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:22:45.559: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 31 11:22:45.607: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 31 11:22:55.830: INFO: Updating stateful set ss2
Jan 31 11:22:55.869: INFO: Waiting for Pod e2e-tests-statefulset-wv2cg/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 11:23:05.914: INFO: Waiting for Pod e2e-tests-statefulset-wv2cg/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 31 11:23:17.943: INFO: Found 2 stateful pods, waiting for 3
Jan 31 11:23:28.118: INFO: Found 2 stateful pods, waiting for 3
Jan 31 11:23:37.961: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:23:37.961: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:23:37.961: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 11:23:47.964: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:23:47.964: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 11:23:47.964: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 31 11:23:48.025: INFO: Updating stateful set ss2
Jan 31 11:23:48.050: INFO: Waiting for Pod e2e-tests-statefulset-wv2cg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 11:23:58.076: INFO: Waiting for Pod e2e-tests-statefulset-wv2cg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 11:24:08.107: INFO: Updating stateful set ss2
Jan 31 11:24:08.822: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv2cg/ss2 to complete update
Jan 31 11:24:08.823: INFO: Waiting for Pod e2e-tests-statefulset-wv2cg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 11:24:18.985: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv2cg/ss2 to complete update
Jan 31 11:24:18.986: INFO: Waiting for Pod e2e-tests-statefulset-wv2cg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 11:24:28.843: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv2cg/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 31 11:24:38.845: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wv2cg
Jan 31 11:24:38.866: INFO: Scaling statefulset ss2 to 0
Jan 31 11:25:09.019: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 11:25:09.026: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:25:09.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wv2cg" for this suite.
Jan 31 11:25:17.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:25:17.212: INFO: namespace: e2e-tests-statefulset-wv2cg, resource: bindings, ignored listing per whitelist
Jan 31 11:25:17.412: INFO: namespace e2e-tests-statefulset-wv2cg deletion completed in 8.287440904s

• [SLOW TEST:182.374 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
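Editor's note: the canary and phased rolling updates above are driven by the StatefulSet `RollingUpdate` strategy's `partition` field: only pods whose ordinal is at or above the partition roll to the new revision, which is why the log shows ss2-2 updating first while ss2-0 and ss2-1 wait. A sketch of the relevant spec fragment:

```yaml
# Sketch: partitioned rolling update on a StatefulSet spec
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only ordinals >= 2 take the new template; lowering
                     # the partition step by step phases the rollout
```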
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:25:17.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:25:17.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 31 11:25:17.837: INFO: stderr: ""
Jan 31 11:25:17.837: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 31 11:25:17.844: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:25:17.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mdgwj" for this suite.
Jan 31 11:25:23.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:25:24.192: INFO: namespace: e2e-tests-kubectl-mdgwj, resource: bindings, ignored listing per whitelist
Jan 31 11:25:24.212: INFO: namespace e2e-tests-kubectl-mdgwj deletion completed in 6.351050784s

S [SKIPPING] [6.799 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 31 11:25:17.844: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:25:24.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 31 11:25:26.993: INFO: Pod name wrapped-volume-race-613dae99-441c-11ea-aae6-0242ac110005: Found 0 pods out of 5
Jan 31 11:25:32.014: INFO: Pod name wrapped-volume-race-613dae99-441c-11ea-aae6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-613dae99-441c-11ea-aae6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-mfwnw, will wait for the garbage collector to delete the pods
Jan 31 11:27:26.151: INFO: Deleting ReplicationController wrapped-volume-race-613dae99-441c-11ea-aae6-0242ac110005 took: 22.116128ms
Jan 31 11:27:26.452: INFO: Terminating ReplicationController wrapped-volume-race-613dae99-441c-11ea-aae6-0242ac110005 pods took: 301.144208ms
STEP: Creating RC which spawns configmap-volume pods
Jan 31 11:28:12.901: INFO: Pod name wrapped-volume-race-c412679a-441c-11ea-aae6-0242ac110005: Found 0 pods out of 5
Jan 31 11:28:17.983: INFO: Pod name wrapped-volume-race-c412679a-441c-11ea-aae6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c412679a-441c-11ea-aae6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-mfwnw, will wait for the garbage collector to delete the pods
Jan 31 11:30:20.110: INFO: Deleting ReplicationController wrapped-volume-race-c412679a-441c-11ea-aae6-0242ac110005 took: 22.055753ms
Jan 31 11:30:20.511: INFO: Terminating ReplicationController wrapped-volume-race-c412679a-441c-11ea-aae6-0242ac110005 pods took: 401.26396ms
STEP: Creating RC which spawns configmap-volume pods
Jan 31 11:31:13.285: INFO: Pod name wrapped-volume-race-2fa23c2d-441d-11ea-aae6-0242ac110005: Found 0 pods out of 5
Jan 31 11:31:18.305: INFO: Pod name wrapped-volume-race-2fa23c2d-441d-11ea-aae6-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2fa23c2d-441d-11ea-aae6-0242ac110005 in namespace e2e-tests-emptydir-wrapper-mfwnw, will wait for the garbage collector to delete the pods
Jan 31 11:33:32.533: INFO: Deleting ReplicationController wrapped-volume-race-2fa23c2d-441d-11ea-aae6-0242ac110005 took: 81.792326ms
Jan 31 11:33:32.935: INFO: Terminating ReplicationController wrapped-volume-race-2fa23c2d-441d-11ea-aae6-0242ac110005 pods took: 401.821995ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:34:25.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mfwnw" for this suite.
Jan 31 11:34:33.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:34:33.536: INFO: namespace: e2e-tests-emptydir-wrapper-mfwnw, resource: bindings, ignored listing per whitelist
Jan 31 11:34:33.553: INFO: namespace e2e-tests-emptydir-wrapper-mfwnw deletion completed in 8.230473092s

• [SLOW TEST:549.341 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:34:33.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 11:34:33.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xqtwd'
Jan 31 11:34:35.700: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 11:34:35.701: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 31 11:34:39.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-xqtwd'
Jan 31 11:34:41.971: INFO: stderr: ""
Jan 31 11:34:41.972: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:34:41.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xqtwd" for this suite.
Jan 31 11:34:51.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:34:51.320: INFO: namespace: e2e-tests-kubectl-xqtwd, resource: bindings, ignored listing per whitelist
Jan 31 11:34:51.472: INFO: namespace e2e-tests-kubectl-xqtwd deletion completed in 9.490986661s

• [SLOW TEST:17.919 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:34:51.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b1ee0b22-441d-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 11:34:51.951: INFO: Waiting up to 5m0s for pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-zvctc" to be "success or failure"
Jan 31 11:34:51.969: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.852478ms
Jan 31 11:34:53.981: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030325766s
Jan 31 11:34:56.007: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056278987s
Jan 31 11:34:58.172: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221500162s
Jan 31 11:35:00.183: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23253053s
Jan 31 11:35:02.215: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.264120287s
STEP: Saw pod success
Jan 31 11:35:02.215: INFO: Pod "pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:35:02.227: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 11:35:02.305: INFO: Waiting for pod pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005 to disappear
Jan 31 11:35:02.312: INFO: Pod pod-secrets-b1f10aa3-441d-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:35:02.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zvctc" for this suite.
Jan 31 11:35:08.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:35:08.454: INFO: namespace: e2e-tests-secrets-zvctc, resource: bindings, ignored listing per whitelist
Jan 31 11:35:08.567: INFO: namespace e2e-tests-secrets-zvctc deletion completed in 6.244403507s

• [SLOW TEST:17.094 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:35:08.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 31 11:35:08.793: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 11:35:08.806: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 11:35:08.816: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 31 11:35:08.844: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:35:08.844: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 31 11:35:08.844: INFO: 	Container weave ready: true, restart count 0
Jan 31 11:35:08.844: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 11:35:08.844: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 31 11:35:08.844: INFO: 	Container coredns ready: true, restart count 0
Jan 31 11:35:08.845: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:35:08.845: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:35:08.845: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:35:08.845: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 31 11:35:08.845: INFO: 	Container coredns ready: true, restart count 0
Jan 31 11:35:08.845: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 31 11:35:08.845: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15eef58de11a0fd5], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:35:09.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-hrhsl" for this suite.
Jan 31 11:35:16.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:35:16.109: INFO: namespace: e2e-tests-sched-pred-hrhsl, resource: bindings, ignored listing per whitelist
Jan 31 11:35:16.164: INFO: namespace e2e-tests-sched-pred-hrhsl deletion completed in 6.173500616s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.597 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:35:16.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 11:35:36.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:36.621: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:38.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:38.655: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:40.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:40.656: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:42.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:42.637: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:44.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:44.645: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:46.623: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:46.668: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:48.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:48.642: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:50.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:50.640: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:52.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:52.643: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:54.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:54.671: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:56.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:56.818: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:35:58.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:35:58.637: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:36:00.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:36:00.636: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 11:36:02.622: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 11:36:02.738: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:36:02.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-djfgw" for this suite.
Jan 31 11:36:26.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:36:26.973: INFO: namespace: e2e-tests-container-lifecycle-hook-djfgw, resource: bindings, ignored listing per whitelist
Jan 31 11:36:27.105: INFO: namespace e2e-tests-container-lifecycle-hook-djfgw deletion completed in 24.243156877s

• [SLOW TEST:70.940 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:36:27.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 11:36:27.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vjqsn'
Jan 31 11:36:27.695: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 11:36:27.695: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 31 11:36:27.718: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-8dvbs]
Jan 31 11:36:27.718: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-8dvbs" in namespace "e2e-tests-kubectl-vjqsn" to be "running and ready"
Jan 31 11:36:27.855: INFO: Pod "e2e-test-nginx-rc-8dvbs": Phase="Pending", Reason="", readiness=false. Elapsed: 136.396566ms
Jan 31 11:36:29.887: INFO: Pod "e2e-test-nginx-rc-8dvbs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168449s
Jan 31 11:36:31.931: INFO: Pod "e2e-test-nginx-rc-8dvbs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212351266s
Jan 31 11:36:33.944: INFO: Pod "e2e-test-nginx-rc-8dvbs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226004686s
Jan 31 11:36:35.959: INFO: Pod "e2e-test-nginx-rc-8dvbs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240962045s
Jan 31 11:36:38.023: INFO: Pod "e2e-test-nginx-rc-8dvbs": Phase="Running", Reason="", readiness=true. Elapsed: 10.304394753s
Jan 31 11:36:38.023: INFO: Pod "e2e-test-nginx-rc-8dvbs" satisfied condition "running and ready"
Jan 31 11:36:38.023: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-8dvbs]
Jan 31 11:36:38.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vjqsn'
Jan 31 11:36:38.554: INFO: stderr: ""
Jan 31 11:36:38.555: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 31 11:36:38.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vjqsn'
Jan 31 11:36:38.899: INFO: stderr: ""
Jan 31 11:36:38.899: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:36:38.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vjqsn" for this suite.
Jan 31 11:37:02.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:37:03.132: INFO: namespace: e2e-tests-kubectl-vjqsn, resource: bindings, ignored listing per whitelist
Jan 31 11:37:03.182: INFO: namespace e2e-tests-kubectl-vjqsn deletion completed in 24.238803797s

• [SLOW TEST:36.076 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:37:03.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-blpp5
Jan 31 11:37:13.445: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-blpp5
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 11:37:13.450: INFO: Initial restart count of pod liveness-exec is 0
Jan 31 11:38:04.747: INFO: Restart count of pod e2e-tests-container-probe-blpp5/liveness-exec is now 1 (51.297017791s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:38:04.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-blpp5" for this suite.
Jan 31 11:38:10.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:38:11.122: INFO: namespace: e2e-tests-container-probe-blpp5, resource: bindings, ignored listing per whitelist
Jan 31 11:38:11.126: INFO: namespace e2e-tests-container-probe-blpp5 deletion completed in 6.220428411s

• [SLOW TEST:67.944 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:38:11.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-28e55f3a-441e-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:38:11.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-d6cvv" to be "success or failure"
Jan 31 11:38:11.537: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.31468ms
Jan 31 11:38:13.550: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045036176s
Jan 31 11:38:15.696: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191255524s
Jan 31 11:38:17.855: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349891926s
Jan 31 11:38:19.956: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450622438s
Jan 31 11:38:21.978: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.473053789s
STEP: Saw pod success
Jan 31 11:38:21.978: INFO: Pod "pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:38:21.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 11:38:22.245: INFO: Waiting for pod pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005 to disappear
Jan 31 11:38:22.251: INFO: Pod pod-configmaps-28e7d4f7-441e-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:38:22.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d6cvv" for this suite.
Jan 31 11:38:28.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:38:28.348: INFO: namespace: e2e-tests-configmap-d6cvv, resource: bindings, ignored listing per whitelist
Jan 31 11:38:28.436: INFO: namespace e2e-tests-configmap-d6cvv deletion completed in 6.179798617s

• [SLOW TEST:17.309 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:38:28.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:38:28.688: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 31 11:38:28.799: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 31 11:38:34.307: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 11:38:38.349: INFO: Creating deployment "test-rolling-update-deployment"
Jan 31 11:38:38.370: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 31 11:38:38.432: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 31 11:38:40.697: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 31 11:38:40.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:38:42.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:38:44.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:38:46.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:38:48.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067528, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067518, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:38:50.753: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 31 11:38:50.775: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-xmb8p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xmb8p/deployments/test-rolling-update-deployment,UID:38f5de4c-441e-11ea-a994-fa163e34d433,ResourceVersion:20075225,Generation:1,CreationTimestamp:2020-01-31 11:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-31 11:38:38 +0000 UTC 2020-01-31 11:38:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-31 11:38:48 +0000 UTC 2020-01-31 11:38:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 11:38:50.783: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-xmb8p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xmb8p/replicasets/test-rolling-update-deployment-75db98fb4c,UID:390f5774-441e-11ea-a994-fa163e34d433,ResourceVersion:20075216,Generation:1,CreationTimestamp:2020-01-31 11:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 38f5de4c-441e-11ea-a994-fa163e34d433 0xc001a09287 0xc001a09288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 31 11:38:50.783: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 31 11:38:50.784: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-xmb8p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xmb8p/replicasets/test-rolling-update-controller,UID:3332ff36-441e-11ea-a994-fa163e34d433,ResourceVersion:20075224,Generation:2,CreationTimestamp:2020-01-31 11:38:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 38f5de4c-441e-11ea-a994-fa163e34d433 0xc001a0901f 0xc001a09080}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 11:38:50.803: INFO: Pod "test-rolling-update-deployment-75db98fb4c-qbn85" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-qbn85,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-xmb8p,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xmb8p/pods/test-rolling-update-deployment-75db98fb4c-qbn85,UID:391a563d-441e-11ea-a994-fa163e34d433,ResourceVersion:20075215,Generation:0,CreationTimestamp:2020-01-31 11:38:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 390f5774-441e-11ea-a994-fa163e34d433 0xc001c70277 0xc001c70278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6bngs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6bngs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6bngs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c702e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c70300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:38:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:38:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:38:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:38:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-31 11:38:38 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-31 11:38:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6e8507d004a299cfc9f4acfb0374bc8f9c033491952918bf75e0a3a85a3b97fb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:38:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xmb8p" for this suite.
Jan 31 11:38:58.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:38:58.931: INFO: namespace: e2e-tests-deployment-xmb8p, resource: bindings, ignored listing per whitelist
Jan 31 11:38:59.058: INFO: namespace e2e-tests-deployment-xmb8p deletion completed in 8.245818569s

• [SLOW TEST:30.622 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
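The Deployment dump above records a RollingUpdate strategy of 25% maxUnavailable / 25% maxSurge, and the ReplicaSet annotations show `desired-replicas: 1, max-replicas: 2`. As a minimal sketch (not the actual deployment controller code), the documented resolution of those percentages against an absolute replica count — surge rounds up, unavailable rounds down — explains both numbers:

```go
package main

import (
	"fmt"
	"math"
)

// rollingUpdateBounds sketches how percentage-valued maxSurge and
// maxUnavailable resolve against a desired replica count: per the
// documented Deployment semantics, maxSurge rounds up and
// maxUnavailable rounds down. Names here are illustrative, not the
// controller's own.
func rollingUpdateBounds(replicas int, surgePct, unavailPct float64) (maxTotal, minAvailable int) {
	surge := int(math.Ceil(float64(replicas) * surgePct / 100))          // rounds up
	unavailable := int(math.Floor(float64(replicas) * unavailPct / 100)) // rounds down
	return replicas + surge, replicas - unavailable
}

func main() {
	// With 1 desired replica and 25%/25%: up to 2 pods may run during the
	// rollout (the "max-replicas: 2" annotation) and at least 1 must stay
	// available (the MinimumReplicasAvailable condition).
	maxTotal, minAvail := rollingUpdateBounds(1, 25, 25)
	fmt.Println(maxTotal, minAvail)
}
```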
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:38:59.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-g2692
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-g2692 to expose endpoints map[]
Jan 31 11:39:00.270: INFO: Get endpoints failed (96.875106ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 31 11:39:01.292: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-g2692 exposes endpoints map[] (1.118241504s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-g2692
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-g2692 to expose endpoints map[pod1:[80]]
Jan 31 11:39:05.501: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.179732905s elapsed, will retry)
Jan 31 11:39:11.197: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-g2692 exposes endpoints map[pod1:[80]] (9.875870214s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-g2692
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-g2692 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 31 11:39:15.517: INFO: Unexpected endpoints: found map[46a3f2b6-441e-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.272259492s elapsed, will retry)
Jan 31 11:39:20.861: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-g2692 exposes endpoints map[pod1:[80] pod2:[80]] (9.615869958s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-g2692
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-g2692 to expose endpoints map[pod2:[80]]
Jan 31 11:39:22.128: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-g2692 exposes endpoints map[pod2:[80]] (1.257429066s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-g2692
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-g2692 to expose endpoints map[]
Jan 31 11:39:23.542: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-g2692 exposes endpoints map[] (1.399257777s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:39:24.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-g2692" for this suite.
Jan 31 11:39:30.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:39:30.367: INFO: namespace: e2e-tests-services-g2692, resource: bindings, ignored listing per whitelist
Jan 31 11:39:30.387: INFO: namespace e2e-tests-services-g2692 deletion completed in 6.235156077s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:31.328 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
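The Services spec above repeatedly logs "waiting up to 3m0s ... to expose endpoints map[...]" and retries on "Unexpected endpoints" until the observed pod-to-ports mapping equals the expected one. A minimal sketch of that comparison (the real framework runs it inside a poll loop with the 3m timeout; the function name is illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

// endpointsMatch sketches the check behind "successfully validated that
// service ... exposes endpoints map[...]": the observed map of pod name
// to container ports must equal the expected map exactly, so a partially
// converged state like map[pod1:[80]] keeps the poll retrying.
func endpointsMatch(got, want map[string][]int) bool {
	return reflect.DeepEqual(got, want)
}

func main() {
	want := map[string][]int{"pod1": {80}, "pod2": {80}}
	fmt.Println(endpointsMatch(map[string][]int{"pod1": {80}}, want)) // still converging
	fmt.Println(endpointsMatch(map[string][]int{"pod1": {80}, "pod2": {80}}, want))
}
```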
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:39:30.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-88xkm
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-88xkm
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-88xkm
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-88xkm
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-88xkm
Jan 31 11:39:43.184: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-88xkm, name: ss-0, uid: 5f5cab3e-441e-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 31 11:39:43.185: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-88xkm
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-88xkm
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-88xkm and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 31 11:39:55.037: INFO: Deleting all statefulset in ns e2e-tests-statefulset-88xkm
Jan 31 11:39:55.052: INFO: Scaling statefulset ss to 0
Jan 31 11:40:15.142: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 11:40:15.165: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:40:15.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-88xkm" for this suite.
Jan 31 11:40:23.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:40:23.503: INFO: namespace: e2e-tests-statefulset-88xkm, resource: bindings, ignored listing per whitelist
Jan 31 11:40:23.565: INFO: namespace e2e-tests-statefulset-88xkm deletion completed in 8.348256074s

• [SLOW TEST:53.176 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:40:23.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 31 11:40:23.933: INFO: Waiting up to 5m0s for pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005" in namespace "e2e-tests-containers-xqf4k" to be "success or failure"
Jan 31 11:40:23.987: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 54.063154ms
Jan 31 11:40:26.068: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134858164s
Jan 31 11:40:28.085: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151910748s
Jan 31 11:40:30.235: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302058884s
Jan 31 11:40:32.285: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351355577s
Jan 31 11:40:34.444: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.510358921s
STEP: Saw pod success
Jan 31 11:40:34.444: INFO: Pod "client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:40:34.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 11:40:34.670: INFO: Waiting for pod client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005 to disappear
Jan 31 11:40:34.701: INFO: Pod client-containers-77d8b7c3-441e-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:40:34.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xqf4k" for this suite.
Jan 31 11:40:42.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:40:42.970: INFO: namespace: e2e-tests-containers-xqf4k, resource: bindings, ignored listing per whitelist
Jan 31 11:40:42.992: INFO: namespace e2e-tests-containers-xqf4k deletion completed in 8.20694765s

• [SLOW TEST:19.426 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
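Several specs in this run (Docker Containers, ConfigMap volumes, Downward API) wait "up to 5m0s ... to be \"success or failure\"": the poll keeps going through Pending/Running and stops at the first terminal phase. A sketch of that loop, with a slice of phases standing in for successive GETs of `pod.Status.Phase` (names are illustrative, not the framework's):

```go
package main

import (
	"errors"
	"fmt"
)

// waitForSuccessOrFailure sketches the "success or failure" condition:
// polling stops as soon as the pod phase leaves Pending/Running. The
// real loop sleeps ~2s between attempts, which matches the Elapsed
// deltas printed in the log.
func waitForSuccessOrFailure(phases []string) (string, error) {
	for _, phase := range phases {
		switch phase {
		case "Succeeded", "Failed":
			return phase, nil // terminal phase: condition satisfied
		}
		// "Pending"/"Running": keep polling.
	}
	return "", errors.New("timed out waiting for terminal phase")
}

func main() {
	final, err := waitForSuccessOrFailure([]string{"Pending", "Pending", "Running", "Succeeded"})
	fmt.Println(final, err)
}
```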
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:40:42.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-8370b9dd-441e-11ea-aae6-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-8370bb90-441e-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8370b9dd-441e-11ea-aae6-0242ac110005
STEP: Updating configmap cm-test-opt-upd-8370bb90-441e-11ea-aae6-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-8370bc5c-441e-11ea-aae6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:41:02.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5hpvz" for this suite.
Jan 31 11:41:28.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:41:28.311: INFO: namespace: e2e-tests-projected-5hpvz, resource: bindings, ignored listing per whitelist
Jan 31 11:41:28.350: INFO: namespace e2e-tests-projected-5hpvz deletion completed in 26.27116473s

• [SLOW TEST:45.358 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:41:28.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 11:41:28.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-4nq5t" to be "success or failure"
Jan 31 11:41:28.685: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.482333ms
Jan 31 11:41:31.075: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401577931s
Jan 31 11:41:33.083: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409329429s
Jan 31 11:41:35.920: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.246126809s
Jan 31 11:41:37.946: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.272854739s
Jan 31 11:41:39.987: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.313637971s
STEP: Saw pod success
Jan 31 11:41:39.988: INFO: Pod "downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:41:40.054: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 11:41:40.261: INFO: Waiting for pod downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005 to disappear
Jan 31 11:41:40.274: INFO: Pod downwardapi-volume-9e77890c-441e-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:41:40.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4nq5t" for this suite.
Jan 31 11:41:46.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:41:46.928: INFO: namespace: e2e-tests-downward-api-4nq5t, resource: bindings, ignored listing per whitelist
Jan 31 11:41:46.966: INFO: namespace e2e-tests-downward-api-4nq5t deletion completed in 6.680143266s

• [SLOW TEST:18.614 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:41:46.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 31 11:41:47.313: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:42:10.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-2bwtg" for this suite.
Jan 31 11:42:34.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:42:34.973: INFO: namespace: e2e-tests-init-container-2bwtg, resource: bindings, ignored listing per whitelist
Jan 31 11:42:35.116: INFO: namespace e2e-tests-init-container-2bwtg deletion completed in 24.277481351s

• [SLOW TEST:48.149 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:42:35.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 31 11:42:35.451: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-6vwr9,SelfLink:/api/v1/namespaces/e2e-tests-watch-6vwr9/configmaps/e2e-watch-test-resource-version,UID:c6315cc9-441e-11ea-a994-fa163e34d433,ResourceVersion:20075825,Generation:0,CreationTimestamp:2020-01-31 11:42:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 11:42:35.451: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-6vwr9,SelfLink:/api/v1/namespaces/e2e-tests-watch-6vwr9/configmaps/e2e-watch-test-resource-version,UID:c6315cc9-441e-11ea-a994-fa163e34d433,ResourceVersion:20075826,Generation:0,CreationTimestamp:2020-01-31 11:42:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:42:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6vwr9" for this suite.
Jan 31 11:42:41.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:42:42.417: INFO: namespace: e2e-tests-watch-6vwr9, resource: bindings, ignored listing per whitelist
Jan 31 11:42:42.431: INFO: namespace e2e-tests-watch-6vwr9 deletion completed in 6.967647794s

• [SLOW TEST:7.315 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
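The Watchers test above starts a watch at the resourceVersion returned by the first update and then expects to see only the later MODIFIED and DELETED notifications. A minimal Python sketch of that replay semantics (a hypothetical helper, not code from the e2e framework or client-go; the first two resourceVersions below are illustrative, only 20075825/20075826 appear in the log):

```python
# Minimal sketch of "watch from a specific resourceVersion": given an ordered
# event history for an object, a watch started at resourceVersion rv replays
# only the events strictly after rv.

def events_since(history, start_rv):
    """Return (type, rv) events whose resourceVersion is newer than start_rv."""
    return [(etype, rv) for (etype, rv) in history if int(rv) > int(start_rv)]

# History modeled on the log: create, two modifications, delete.
history = [
    ("ADDED",    "20075823"),  # illustrative RV
    ("MODIFIED", "20075824"),  # first update -> the watch starts here
    ("MODIFIED", "20075825"),
    ("DELETED",  "20075826"),
]

# Watching from the first update's RV yields exactly the MODIFIED and
# DELETED notifications the test logs as "Got : ...".
print(events_since(history, "20075824"))
# [('MODIFIED', '20075825'), ('DELETED', '20075826')]
```
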
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:42:42.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0131 11:42:54.776138       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 11:42:54.776: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:42:54.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8mnlm" for this suite.
Jan 31 11:43:17.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:43:18.182: INFO: namespace: e2e-tests-gc-8mnlm, resource: bindings, ignored listing per whitelist
Jan 31 11:43:18.418: INFO: namespace e2e-tests-gc-8mnlm deletion completed in 23.637347829s

• [SLOW TEST:35.988 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
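The garbage-collector test above gives half the pods a second owner (`simpletest-rc-to-stay`) before deleting `simpletest-rc-to-be-deleted`, then checks that those pods survive. A sketch of the ownership rule being verified (hypothetical helper names; this is not the garbage collector's actual code):

```python
# A dependent object is kept as long as at least one of its ownerReferences
# still points at a live object; it is only collected once all owners are gone.

def survives_gc(owner_uids, live_uids):
    """True if any owner of the dependent is still live."""
    return any(uid in live_uids for uid in owner_uids)

# After simpletest-rc-to-be-deleted is removed, only simpletest-rc-to-stay
# remains live. Pods owned solely by the deleted RC are collected; pods that
# also list the surviving RC as an owner are kept.
live = {"uid-rc-to-stay"}
print(survives_gc({"uid-rc-to-be-deleted"}, live))                    # False
print(survives_gc({"uid-rc-to-be-deleted", "uid-rc-to-stay"}, live))  # True
```
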
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:43:18.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 11:43:20.716: INFO: Waiting up to 5m0s for pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-bkg2r" to be "success or failure"
Jan 31 11:43:21.263: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 547.031599ms
Jan 31 11:43:23.905: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.18894006s
Jan 31 11:43:25.930: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.213754823s
Jan 31 11:43:27.948: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.231819299s
Jan 31 11:43:31.215: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.499442013s
Jan 31 11:43:33.235: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.519350341s
Jan 31 11:43:35.253: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.537498429s
Jan 31 11:43:37.282: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.565663455s
STEP: Saw pod success
Jan 31 11:43:37.282: INFO: Pod "pod-e13f13d1-441e-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:43:37.287: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e13f13d1-441e-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 11:43:37.564: INFO: Waiting for pod pod-e13f13d1-441e-11ea-aae6-0242ac110005 to disappear
Jan 31 11:43:37.572: INFO: Pod pod-e13f13d1-441e-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:43:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bkg2r" for this suite.
Jan 31 11:43:43.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:43:43.764: INFO: namespace: e2e-tests-emptydir-bkg2r, resource: bindings, ignored listing per whitelist
Jan 31 11:43:43.782: INFO: namespace e2e-tests-emptydir-bkg2r deletion completed in 6.19995291s

• [SLOW TEST:25.359 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
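The EmptyDir `(non-root,0666,default)` case above mounts an emptyDir volume whose files carry mode 0666 and has the test container report the permission bits. A local sketch of the same permission check, done on an ordinary temp file rather than a volume mount:

```python
# Create a file, set mode 0666 explicitly, and read back the permission bits
# the way the e2e test container would report them.
import os
import stat
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o666)  # chmod is explicit, so the process umask does not apply
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
os.unlink(path)
```
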
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:43:43.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ef445803-441e-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:43:44.271: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-787z9" to be "success or failure"
Jan 31 11:43:44.284: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.519314ms
Jan 31 11:43:46.451: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179915837s
Jan 31 11:43:48.471: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199897859s
Jan 31 11:43:50.678: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406468935s
Jan 31 11:43:52.730: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.458359538s
Jan 31 11:43:54.759: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.488045502s
Jan 31 11:43:56.776: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.504874263s
STEP: Saw pod success
Jan 31 11:43:56.776: INFO: Pod "pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:43:56.783: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 11:43:56.873: INFO: Waiting for pod pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005 to disappear
Jan 31 11:43:56.885: INFO: Pod pod-projected-configmaps-ef45c56c-441e-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:43:56.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-787z9" for this suite.
Jan 31 11:44:03.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:44:03.153: INFO: namespace: e2e-tests-projected-787z9, resource: bindings, ignored listing per whitelist
Jan 31 11:44:03.280: INFO: namespace e2e-tests-projected-787z9 deletion completed in 6.369480551s

• [SLOW TEST:19.498 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
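The projected configMap test above exercises "mappings and Item mode set": the volume's `items` list remaps configMap keys to file paths, optionally with a per-item mode. A sketch of that remapping (hypothetical helper, not the kubelet's code; the key and path names are modeled on the e2e fixtures):

```python
# Project configMap data into a file layout per the volume's items list:
# each item picks a key and the path it should be mounted at.

def project_items(data, items):
    """Map configMap keys to mounted file paths per the items list."""
    return {it["path"]: data[it["key"]] for it in items if it["key"] in data}

data = {"data-1": "value-1"}
items = [{"key": "data-1", "path": "path/to/data-2", "mode": 0o400}]
print(project_items(data, items))  # {'path/to/data-2': 'value-1'}
```
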
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:44:03.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:44:03.568: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 11:44:03.663: INFO: Number of nodes with available pods: 0
Jan 31 11:44:03.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:05.777: INFO: Number of nodes with available pods: 0
Jan 31 11:44:05.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:06.693: INFO: Number of nodes with available pods: 0
Jan 31 11:44:06.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:07.728: INFO: Number of nodes with available pods: 0
Jan 31 11:44:07.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:08.711: INFO: Number of nodes with available pods: 0
Jan 31 11:44:08.711: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:09.686: INFO: Number of nodes with available pods: 0
Jan 31 11:44:09.686: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:10.686: INFO: Number of nodes with available pods: 0
Jan 31 11:44:10.686: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:11.690: INFO: Number of nodes with available pods: 0
Jan 31 11:44:11.690: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:12.707: INFO: Number of nodes with available pods: 0
Jan 31 11:44:12.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:13.715: INFO: Number of nodes with available pods: 0
Jan 31 11:44:13.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:14.691: INFO: Number of nodes with available pods: 1
Jan 31 11:44:14.691: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 31 11:44:14.779: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:15.951: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:17.661: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:18.911: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:19.932: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:20.916: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:21.915: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:21.915: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:22.911: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:22.912: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:23.914: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:23.914: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:24.916: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:24.916: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:25.920: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:25.920: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:26.931: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:26.931: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:27.926: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:27.926: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:28.939: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:28.939: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:29.918: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:29.918: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:30.910: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:30.911: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:31.918: INFO: Wrong image for pod: daemon-set-jjjm7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 31 11:44:31.918: INFO: Pod daemon-set-jjjm7 is not available
Jan 31 11:44:32.922: INFO: Pod daemon-set-ks264 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 31 11:44:33.070: INFO: Number of nodes with available pods: 0
Jan 31 11:44:33.070: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:34.090: INFO: Number of nodes with available pods: 0
Jan 31 11:44:34.090: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:35.295: INFO: Number of nodes with available pods: 0
Jan 31 11:44:35.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:36.129: INFO: Number of nodes with available pods: 0
Jan 31 11:44:36.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:37.100: INFO: Number of nodes with available pods: 0
Jan 31 11:44:37.100: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:40.007: INFO: Number of nodes with available pods: 0
Jan 31 11:44:40.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:40.536: INFO: Number of nodes with available pods: 0
Jan 31 11:44:40.536: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:41.087: INFO: Number of nodes with available pods: 0
Jan 31 11:44:41.087: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:42.189: INFO: Number of nodes with available pods: 0
Jan 31 11:44:42.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:43.161: INFO: Number of nodes with available pods: 0
Jan 31 11:44:43.161: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:44:44.095: INFO: Number of nodes with available pods: 1
Jan 31 11:44:44.095: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rcsc9, will wait for the garbage collector to delete the pods
Jan 31 11:44:44.250: INFO: Deleting DaemonSet.extensions daemon-set took: 13.640972ms
Jan 31 11:44:44.451: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.804964ms
Jan 31 11:44:52.933: INFO: Number of nodes with available pods: 0
Jan 31 11:44:52.933: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 11:44:52.937: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rcsc9/daemonsets","resourceVersion":"20076195"},"items":null}

Jan 31 11:44:52.941: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rcsc9/pods","resourceVersion":"20076195"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:44:52.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rcsc9" for this suite.
Jan 31 11:44:58.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:44:59.165: INFO: namespace: e2e-tests-daemonsets-rcsc9, resource: bindings, ignored listing per whitelist
Jan 31 11:44:59.185: INFO: namespace e2e-tests-daemonsets-rcsc9 deletion completed in 6.229501633s

• [SLOW TEST:55.904 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
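The RollingUpdate DaemonSet test above repeatedly logs "Wrong image for pod" until the old pod is replaced by one running the updated image. A sketch of that image-mismatch check, using the pod names and images from the log (hypothetical helper; the real test walks the pod list via the API):

```python
# After a DaemonSet spec update, a rolling update is done only when no pod
# still runs the pre-update image.

def pods_with_wrong_image(pods, expected):
    """Names of daemon pods whose image does not yet match the updated spec."""
    return [name for name, image in pods.items() if image != expected]

expected = "gcr.io/kubernetes-e2e-test-images/redis:1.0"

# Before the rollout: daemon-set-jjjm7 still runs the old nginx image.
old = {"daemon-set-jjjm7": "docker.io/library/nginx:1.14-alpine"}
print(pods_with_wrong_image(old, expected))  # ['daemon-set-jjjm7']

# After the replacement pod daemon-set-ks264 comes up with the new image.
new = {"daemon-set-ks264": expected}
print(pods_with_wrong_image(new, expected))  # []
```
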
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:44:59.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:44:59.407: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 31 11:45:04.427: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 11:45:08.454: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 31 11:45:10.473: INFO: Creating deployment "test-rollover-deployment"
Jan 31 11:45:10.524: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 31 11:45:12.565: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 31 11:45:12.620: INFO: Ensure that both replica sets have 1 created replica
Jan 31 11:45:12.652: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 31 11:45:12.728: INFO: Updating deployment test-rollover-deployment
Jan 31 11:45:12.729: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 31 11:45:14.764: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 31 11:45:14.781: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 31 11:45:14.793: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:14.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067913, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:17.695: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:17.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067913, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:18.822: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:18.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067913, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:21.335: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:21.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067913, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:22.812: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:22.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067913, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:24.812: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:24.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067923, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:26.816: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:26.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067923, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:28.810: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:28.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067923, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:30.834: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:30.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067923, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:32.820: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 11:45:32.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067923, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716067910, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:45:34.857: INFO: 
Jan 31 11:45:34.857: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 31 11:45:34.891: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-rjjhx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rjjhx/deployments/test-rollover-deployment,UID:22aed9d8-441f-11ea-a994-fa163e34d433,ResourceVersion:20076331,Generation:2,CreationTimestamp:2020-01-31 11:45:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-31 11:45:10 +0000 UTC 2020-01-31 11:45:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-31 11:45:33 +0000 UTC 2020-01-31 11:45:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 11:45:34.910: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-rjjhx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rjjhx/replicasets/test-rollover-deployment-5b8479fdb6,UID:2407b6f3-441f-11ea-a994-fa163e34d433,ResourceVersion:20076322,Generation:2,CreationTimestamp:2020-01-31 11:45:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 22aed9d8-441f-11ea-a994-fa163e34d433 0xc00149f857 0xc00149f858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 31 11:45:34.910: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 31 11:45:34.910: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-rjjhx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rjjhx/replicasets/test-rollover-controller,UID:1c13e20b-441f-11ea-a994-fa163e34d433,ResourceVersion:20076330,Generation:2,CreationTimestamp:2020-01-31 11:44:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 22aed9d8-441f-11ea-a994-fa163e34d433 0xc00149f3a7 0xc00149f3a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 11:45:34.911: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-rjjhx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rjjhx/replicasets/test-rollover-deployment-58494b7559,UID:22bdadbe-441f-11ea-a994-fa163e34d433,ResourceVersion:20076291,Generation:2,CreationTimestamp:2020-01-31 11:45:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 22aed9d8-441f-11ea-a994-fa163e34d433 0xc00149f657 0xc00149f658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 11:45:34.921: INFO: Pod "test-rollover-deployment-5b8479fdb6-njmjw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-njmjw,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-rjjhx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rjjhx/pods/test-rollover-deployment-5b8479fdb6-njmjw,UID:245639db-441f-11ea-a994-fa163e34d433,ResourceVersion:20076307,Generation:0,CreationTimestamp:2020-01-31 11:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 2407b6f3-441f-11ea-a994-fa163e34d433 0xc0015d31e7 0xc0015d31e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gn2b5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gn2b5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gn2b5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015d32a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015d32c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:45:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:45:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:45:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-31 11:45:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-31 11:45:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://22e81b261527d894537a52f9b229c890a7d6f4c19575e7cab4d5e82646633d49}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:45:34.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rjjhx" for this suite.
Jan 31 11:45:43.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:45:43.567: INFO: namespace: e2e-tests-deployment-rjjhx, resource: bindings, ignored listing per whitelist
Jan 31 11:45:43.717: INFO: namespace e2e-tests-deployment-rjjhx deletion completed in 8.787270209s

• [SLOW TEST:44.532 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
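The rollover test above updates a Deployment's pod template mid-rollout and waits for the new ReplicaSet to take over without dropping availability. A minimal sketch of the Deployment it exercises, reconstructed from the spec dump in the log (MaxUnavailable:0, MaxSurge:1, MinReadySeconds:10, the redis image, and the `name: rollover-pod` label); field values not visible in the dump are omitted, and this is an approximation rather than the test's actual fixture:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # matches MinReadySeconds:10 in the dumped spec
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # the old pod stays up until the new one is ready
      maxSurge: 1              # at most one extra pod during the rollover
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `maxUnavailable: 0` and `minReadySeconds: 10`, the controller must keep the old replica serving until the new pod has been Ready for 10 seconds, which is why the log shows repeated "all replica sets need to contain the pod-template-hash label" polls before both old ReplicaSets scale to zero.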
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:45:43.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 31 11:45:44.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076380,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 11:45:44.555: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076380,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 31 11:45:54.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076393,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 31 11:45:54.589: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076393,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 31 11:46:04.678: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076405,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 11:46:04.678: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076405,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 31 11:46:14.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076417,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 11:46:14.702: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-a,UID:36e85cd6-441f-11ea-a994-fa163e34d433,ResourceVersion:20076417,Generation:0,CreationTimestamp:2020-01-31 11:45:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 31 11:46:24.768: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-b,UID:4ef04f97-441f-11ea-a994-fa163e34d433,ResourceVersion:20076430,Generation:0,CreationTimestamp:2020-01-31 11:46:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 11:46:24.769: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-b,UID:4ef04f97-441f-11ea-a994-fa163e34d433,ResourceVersion:20076430,Generation:0,CreationTimestamp:2020-01-31 11:46:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 31 11:46:34.804: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-b,UID:4ef04f97-441f-11ea-a994-fa163e34d433,ResourceVersion:20076443,Generation:0,CreationTimestamp:2020-01-31 11:46:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 11:46:34.804: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-4wblg,SelfLink:/api/v1/namespaces/e2e-tests-watch-4wblg/configmaps/e2e-watch-test-configmap-b,UID:4ef04f97-441f-11ea-a994-fa163e34d433,ResourceVersion:20076443,Generation:0,CreationTimestamp:2020-01-31 11:46:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:46:44.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4wblg" for this suite.
Jan 31 11:46:50.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:46:50.972: INFO: namespace: e2e-tests-watch-4wblg, resource: bindings, ignored listing per whitelist
Jan 31 11:46:51.072: INFO: namespace e2e-tests-watch-4wblg deletion completed in 6.232074465s

• [SLOW TEST:67.355 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
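The watch test above runs three watchers that differ only in their label selectors (label A, label B, and A-or-B), then verifies each ADDED/MODIFIED/DELETED event reaches exactly the watchers whose selector matches. The objects involved look roughly like this, reconstructed from the ObjectMeta dumps in the log (a sketch, not the test's actual fixtures):

```yaml
# Events for this ConfigMap reach watcher A and the A-or-B watcher,
# which is why each ADDED/MODIFIED/DELETED line appears twice above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "2"   # bumped on each modify step; the log shows mutation: 1, then 2
---
# Events for this ConfigMap reach watcher B and the A-or-B watcher.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  labels:
    watch-this-configmap: multiple-watchers-B
```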
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:46:51.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-5ec3f655-441f-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:46:51.296: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-xmqb7" to be "success or failure"
Jan 31 11:46:51.393: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 97.05696ms
Jan 31 11:46:53.483: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187028158s
Jan 31 11:46:55.496: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199661865s
Jan 31 11:46:57.841: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.54503335s
Jan 31 11:46:59.898: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60190762s
Jan 31 11:47:01.922: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.625259788s
STEP: Saw pod success
Jan 31 11:47:01.922: INFO: Pod "pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:47:01.933: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 11:47:02.470: INFO: Waiting for pod pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005 to disappear
Jan 31 11:47:02.485: INFO: Pod pod-configmaps-5ec4f182-441f-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:47:02.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xmqb7" for this suite.
Jan 31 11:47:08.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:47:08.728: INFO: namespace: e2e-tests-configmap-xmqb7, resource: bindings, ignored listing per whitelist
Jan 31 11:47:08.761: INFO: namespace e2e-tests-configmap-xmqb7 deletion completed in 6.261809586s

• [SLOW TEST:17.689 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:47:08.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 31 11:47:09.517: INFO: Waiting up to 5m0s for pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9" in namespace "e2e-tests-svcaccounts-tz8d6" to be "success or failure"
Jan 31 11:47:09.591: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 73.52463ms
Jan 31 11:47:11.629: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111823133s
Jan 31 11:47:13.639: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121743738s
Jan 31 11:47:16.038: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.520562994s
Jan 31 11:47:18.084: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566749151s
Jan 31 11:47:20.119: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.602254716s
Jan 31 11:47:22.653: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.136452056s
Jan 31 11:47:24.697: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.180272427s
Jan 31 11:47:27.746: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.229239107s
STEP: Saw pod success
Jan 31 11:47:27.746: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9" satisfied condition "success or failure"
Jan 31 11:47:28.063: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9 container token-test: 
STEP: delete the pod
Jan 31 11:47:28.296: INFO: Waiting for pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9 to disappear
Jan 31 11:47:28.319: INFO: Pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-2h4c9 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 31 11:47:28.336: INFO: Waiting up to 5m0s for pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf" in namespace "e2e-tests-svcaccounts-tz8d6" to be "success or failure"
Jan 31 11:47:28.451: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 114.663559ms
Jan 31 11:47:30.470: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13376348s
Jan 31 11:47:32.525: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188695729s
Jan 31 11:47:34.822: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.485397057s
Jan 31 11:47:36.858: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.521454037s
Jan 31 11:47:38.874: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.537851264s
Jan 31 11:47:41.154: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.817677329s
Jan 31 11:47:43.166: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.829618454s
Jan 31 11:47:45.181: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.844595741s
STEP: Saw pod success
Jan 31 11:47:45.181: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf" satisfied condition "success or failure"
Jan 31 11:47:45.186: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf container root-ca-test: 
STEP: delete the pod
Jan 31 11:47:45.824: INFO: Waiting for pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf to disappear
Jan 31 11:47:46.036: INFO: Pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-55wwf no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 31 11:47:46.099: INFO: Waiting up to 5m0s for pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv" in namespace "e2e-tests-svcaccounts-tz8d6" to be "success or failure"
Jan 31 11:47:46.118: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.206121ms
Jan 31 11:47:48.184: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084417353s
Jan 31 11:47:50.213: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112854422s
Jan 31 11:47:52.496: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396601162s
Jan 31 11:47:54.530: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429889841s
Jan 31 11:47:57.307: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.207506857s
Jan 31 11:47:59.332: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.232289562s
Jan 31 11:48:01.344: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.244392847s
Jan 31 11:48:03.361: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Pending", Reason="", readiness=false. Elapsed: 17.261164863s
Jan 31 11:48:05.420: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.319949804s
STEP: Saw pod success
Jan 31 11:48:05.420: INFO: Pod "pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv" satisfied condition "success or failure"
Jan 31 11:48:05.431: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv container namespace-test: 
STEP: delete the pod
Jan 31 11:48:05.648: INFO: Waiting for pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv to disappear
Jan 31 11:48:05.668: INFO: Pod pod-service-account-699d11ba-441f-11ea-aae6-0242ac110005-zcsgv no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:48:05.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-tz8d6" for this suite.
Jan 31 11:48:13.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:48:13.990: INFO: namespace: e2e-tests-svcaccounts-tz8d6, resource: bindings, ignored listing per whitelist
Jan 31 11:48:14.012: INFO: namespace e2e-tests-svcaccounts-tz8d6 deletion completed in 8.332025117s

• [SLOW TEST:65.251 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:48:14.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 31 11:48:14.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 11:48:14.291: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 11:48:14.295: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 31 11:48:14.308: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:48:14.308: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 31 11:48:14.308: INFO: 	Container coredns ready: true, restart count 0
Jan 31 11:48:14.308: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 31 11:48:14.308: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 11:48:14.308: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:48:14.308: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 31 11:48:14.308: INFO: 	Container weave ready: true, restart count 0
Jan 31 11:48:14.308: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 11:48:14.308: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 31 11:48:14.308: INFO: 	Container coredns ready: true, restart count 0
Jan 31 11:48:14.308: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:48:14.308: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-965b92cf-441f-11ea-aae6-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-965b92cf-441f-11ea-aae6-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-965b92cf-441f-11ea-aae6-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:48:36.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-d692x" for this suite.
Jan 31 11:48:51.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:48:51.058: INFO: namespace: e2e-tests-sched-pred-d692x, resource: bindings, ignored listing per whitelist
Jan 31 11:48:51.172: INFO: namespace e2e-tests-sched-pred-d692x deletion completed in 14.266496365s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.158 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:48:51.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:48:57.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6mnl2" for this suite.
Jan 31 11:49:03.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:49:04.237: INFO: namespace: e2e-tests-namespaces-6mnl2, resource: bindings, ignored listing per whitelist
Jan 31 11:49:04.243: INFO: namespace e2e-tests-namespaces-6mnl2 deletion completed in 6.443297995s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rwxqc" for this suite.
Jan 31 11:49:04.246: INFO: Namespace e2e-tests-nsdeletetest-rwxqc was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-jq42s" for this suite.
Jan 31 11:49:10.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:49:10.406: INFO: namespace: e2e-tests-nsdeletetest-jq42s, resource: bindings, ignored listing per whitelist
Jan 31 11:49:10.527: INFO: namespace e2e-tests-nsdeletetest-jq42s deletion completed in 6.280908926s

• [SLOW TEST:19.355 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:49:10.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 31 11:49:20.813: INFO: Pod pod-hostip-b1db7108-441f-11ea-aae6-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:49:20.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-s5vw5" for this suite.
Jan 31 11:49:44.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:49:44.962: INFO: namespace: e2e-tests-pods-s5vw5, resource: bindings, ignored listing per whitelist
Jan 31 11:49:44.972: INFO: namespace e2e-tests-pods-s5vw5 deletion completed in 24.151632031s

• [SLOW TEST:34.443 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:49:44.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:49:55.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ppjh2" for this suite.
Jan 31 11:50:39.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:50:39.614: INFO: namespace: e2e-tests-kubelet-test-ppjh2, resource: bindings, ignored listing per whitelist
Jan 31 11:50:39.646: INFO: namespace e2e-tests-kubelet-test-ppjh2 deletion completed in 44.310975441s

• [SLOW TEST:54.674 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:50:39.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-e70ee276-441f-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 11:50:39.993: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-bzvcw" to be "success or failure"
Jan 31 11:50:40.079: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.105849ms
Jan 31 11:50:42.107: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11384133s
Jan 31 11:50:44.117: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123629569s
Jan 31 11:50:46.844: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.850265079s
Jan 31 11:50:48.881: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88710661s
Jan 31 11:50:50.895: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.901445632s
STEP: Saw pod success
Jan 31 11:50:50.895: INFO: Pod "pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:50:50.902: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 11:50:51.124: INFO: Waiting for pod pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005 to disappear
Jan 31 11:50:51.138: INFO: Pod pod-projected-secrets-e7104f3d-441f-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:50:51.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bzvcw" for this suite.
Jan 31 11:50:57.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:50:57.360: INFO: namespace: e2e-tests-projected-bzvcw, resource: bindings, ignored listing per whitelist
Jan 31 11:50:57.510: INFO: namespace e2e-tests-projected-bzvcw deletion completed in 6.311224034s

• [SLOW TEST:17.864 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:50:57.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 31 11:50:58.176: INFO: created pod pod-service-account-defaultsa
Jan 31 11:50:58.176: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 31 11:50:58.189: INFO: created pod pod-service-account-mountsa
Jan 31 11:50:58.189: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 31 11:50:58.269: INFO: created pod pod-service-account-nomountsa
Jan 31 11:50:58.269: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 31 11:50:58.360: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 31 11:50:58.361: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 31 11:50:58.411: INFO: created pod pod-service-account-mountsa-mountspec
Jan 31 11:50:58.411: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 31 11:50:58.444: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 31 11:50:58.445: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 31 11:50:58.577: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 31 11:50:58.578: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 31 11:50:58.592: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 31 11:50:58.592: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 31 11:50:58.614: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 31 11:50:58.615: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:50:58.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-sjc6v" for this suite.
Jan 31 11:51:27.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:51:27.244: INFO: namespace: e2e-tests-svcaccounts-sjc6v, resource: bindings, ignored listing per whitelist
Jan 31 11:51:27.380: INFO: namespace e2e-tests-svcaccounts-sjc6v deletion completed in 28.747226735s

• [SLOW TEST:29.869 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:51:27.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0131 11:52:14.289193       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 11:52:14.289: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:52:14.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7rlqg" for this suite.
Jan 31 11:52:26.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:52:26.736: INFO: namespace: e2e-tests-gc-7rlqg, resource: bindings, ignored listing per whitelist
Jan 31 11:52:26.745: INFO: namespace e2e-tests-gc-7rlqg deletion completed in 12.440037094s

• [SLOW TEST:59.365 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:52:26.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:52:31.553: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:52:33.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-4f4dz" for this suite.
Jan 31 11:52:40.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:52:40.779: INFO: namespace: e2e-tests-custom-resource-definition-4f4dz, resource: bindings, ignored listing per whitelist
Jan 31 11:52:40.830: INFO: namespace e2e-tests-custom-resource-definition-4f4dz deletion completed in 6.531697138s

• [SLOW TEST:14.083 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
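The CustomResourceDefinition test above creates and deletes CRD objects; one rule the API server enforces on create is that a CRD's metadata.name must be "&lt;spec.names.plural&gt;.&lt;spec.group&gt;". A simplified validator for just that naming rule (the real server checks considerably more):

```python
import re

def validate_crd_name(name, plural, group):
    """Check that a CRD's metadata.name is "<plural>.<group>" and that
    each dot-separated segment is a DNS-1123 label. Simplified sketch;
    the API server enforces additional constraints."""
    if name != f"{plural}.{group}":
        return False
    label = r"[a-z0-9]([-a-z0-9]*[a-z0-9])?"
    return all(re.fullmatch(label, seg) is not None for seg in name.split("."))
```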
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:52:40.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dcjtm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 11:52:41.026: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 11:53:13.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dcjtm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 11:53:13.313: INFO: >>> kubeConfig: /root/.kube/config
I0131 11:53:13.373569       8 log.go:172] (0xc00048cbb0) (0xc001a0a140) Create stream
I0131 11:53:13.373666       8 log.go:172] (0xc00048cbb0) (0xc001a0a140) Stream added, broadcasting: 1
I0131 11:53:13.379782       8 log.go:172] (0xc00048cbb0) Reply frame received for 1
I0131 11:53:13.379821       8 log.go:172] (0xc00048cbb0) (0xc002503e00) Create stream
I0131 11:53:13.379833       8 log.go:172] (0xc00048cbb0) (0xc002503e00) Stream added, broadcasting: 3
I0131 11:53:13.380977       8 log.go:172] (0xc00048cbb0) Reply frame received for 3
I0131 11:53:13.381034       8 log.go:172] (0xc00048cbb0) (0xc002627ae0) Create stream
I0131 11:53:13.381042       8 log.go:172] (0xc00048cbb0) (0xc002627ae0) Stream added, broadcasting: 5
I0131 11:53:13.382055       8 log.go:172] (0xc00048cbb0) Reply frame received for 5
I0131 11:53:13.624826       8 log.go:172] (0xc00048cbb0) Data frame received for 3
I0131 11:53:13.625042       8 log.go:172] (0xc002503e00) (3) Data frame handling
I0131 11:53:13.625099       8 log.go:172] (0xc002503e00) (3) Data frame sent
I0131 11:53:13.782047       8 log.go:172] (0xc00048cbb0) Data frame received for 1
I0131 11:53:13.782394       8 log.go:172] (0xc00048cbb0) (0xc002503e00) Stream removed, broadcasting: 3
I0131 11:53:13.782585       8 log.go:172] (0xc001a0a140) (1) Data frame handling
I0131 11:53:13.782698       8 log.go:172] (0xc001a0a140) (1) Data frame sent
I0131 11:53:13.782760       8 log.go:172] (0xc00048cbb0) (0xc002627ae0) Stream removed, broadcasting: 5
I0131 11:53:13.782891       8 log.go:172] (0xc00048cbb0) (0xc001a0a140) Stream removed, broadcasting: 1
I0131 11:53:13.782943       8 log.go:172] (0xc00048cbb0) Go away received
I0131 11:53:13.783791       8 log.go:172] (0xc00048cbb0) (0xc001a0a140) Stream removed, broadcasting: 1
I0131 11:53:13.783838       8 log.go:172] (0xc00048cbb0) (0xc002503e00) Stream removed, broadcasting: 3
I0131 11:53:13.783871       8 log.go:172] (0xc00048cbb0) (0xc002627ae0) Stream removed, broadcasting: 5
Jan 31 11:53:13.784: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:53:13.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dcjtm" for this suite.
Jan 31 11:53:39.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:53:40.067: INFO: namespace: e2e-tests-pod-network-test-dcjtm, resource: bindings, ignored listing per whitelist
Jan 31 11:53:40.072: INFO: namespace e2e-tests-pod-network-test-dcjtm deletion completed in 26.269962831s

• [SLOW TEST:59.242 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
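The `/dial?...` request in the networking test above asks a test container to probe a peer pod over HTTP; the probe replies with a JSON body listing the hostnames that answered, and the framework removes each reported hostname from its expected set until "Waiting for endpoints: map[]" shows nothing left. A rough, stdlib-only sketch of that bookkeeping (function name and data shapes are assumptions, not the framework's actual API):

```python
import json

def remaining_endpoints(expected_hostnames, dial_response_body):
    """Drop every hostname reported in a /dial JSON response from the
    expected set; an empty result corresponds to the log line
    'Waiting for endpoints: map[]'. Illustrative sketch only."""
    got = set(json.loads(dial_response_body).get("responses", []))
    return set(expected_hostnames) - got
```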
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:53:40.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-p8lvb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p8lvb to expose endpoints map[]
Jan 31 11:53:40.359: INFO: Get endpoints failed (17.720076ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 31 11:53:41.379: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p8lvb exposes endpoints map[] (1.037067546s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-p8lvb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p8lvb to expose endpoints map[pod1:[100]]
Jan 31 11:53:45.726: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.29519714s elapsed, will retry)
Jan 31 11:53:51.354: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p8lvb exposes endpoints map[pod1:[100]] (9.923014617s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-p8lvb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p8lvb to expose endpoints map[pod1:[100] pod2:[101]]
Jan 31 11:53:56.080: INFO: Unexpected endpoints: found map[53369152-4420-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.680216981s elapsed, will retry)
Jan 31 11:54:02.501: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p8lvb exposes endpoints map[pod1:[100] pod2:[101]] (11.100979337s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-p8lvb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p8lvb to expose endpoints map[pod2:[101]]
Jan 31 11:54:03.736: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p8lvb exposes endpoints map[pod2:[101]] (1.226158173s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-p8lvb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p8lvb to expose endpoints map[]
Jan 31 11:54:03.778: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p8lvb exposes endpoints map[] (24.640961ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:54:03.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-p8lvb" for this suite.
Jan 31 11:54:28.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:54:28.222: INFO: namespace: e2e-tests-services-p8lvb, resource: bindings, ignored listing per whitelist
Jan 31 11:54:28.248: INFO: namespace e2e-tests-services-p8lvb deletion completed in 24.243157956s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.174 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
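The multiport-endpoints test above repeatedly compares the endpoints the service currently exposes (a map of pod name to port list, e.g. map[pod1:[100] pod2:[101]]) against what it expects, retrying until they match. A minimal sketch of that comparison, ignoring port order (the helper is illustrative, not the framework's code):

```python
def endpoints_match(expected, found):
    """Compare endpoint maps of the form {pod_name: [ports]}, the way
    the log lines above compare expected map[pod1:[100] pod2:[101]]
    against the current Endpoints object. Port order is irrelevant."""
    if expected.keys() != found.keys():
        return False
    return all(sorted(expected[k]) == sorted(found[k]) for k in expected)
```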
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:54:28.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-nwm27 in namespace e2e-tests-proxy-n2mrp
I0131 11:54:28.863656       8 runners.go:184] Created replication controller with name: proxy-service-nwm27, namespace: e2e-tests-proxy-n2mrp, replica count: 1
I0131 11:54:29.914860       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:30.915811       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:31.916406       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:32.917089       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:33.918215       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:34.918763       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:35.919139       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:36.919487       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:37.919877       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 11:54:38.920236       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 11:54:39.920725       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 11:54:40.921295       8 runners.go:184] proxy-service-nwm27 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 11:54:40.939: INFO: setup took 12.268951393s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 31 11:54:40.982: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-n2mrp/pods/http:proxy-service-nwm27-fwbrm:160/proxy/: foo (200; 43.04348ms)
Jan 31 11:54:40.982: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-n2mrp/pods/proxy-service-nwm27-fwbrm:160/proxy/: foo (200; 42.618291ms)
Jan 31 11:54:40.986: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-n2mrp/pods/proxy-service-nwm27-fwbrm:162/proxy/: bar (200; 46.041551ms)
Jan 31 11:54:40.988: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-n2mrp/pods/http:proxy-service-nwm27-fwbrm:162/proxy/: bar (200; 48.829177ms)
Jan 31 11:54:40.989: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-n2mrp/services/proxy-service-nwm27:portname1/proxy/: foo (200; 49.294663ms)
Jan 31 11:54:40.992: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-n2mrp/pods/proxy-service-nwm27-fwbrm:1080/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 31 11:54:55.526: INFO: PodSpec: initContainers in spec.initContainers
Jan 31 11:56:09.191: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7f662bbb-4420-11ea-aae6-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-mjjsm", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-mjjsm/pods/pod-init-7f662bbb-4420-11ea-aae6-0242ac110005", UID:"7f6b0d35-4420-11ea-a994-fa163e34d433", ResourceVersion:"20077849", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716068495, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"526647648"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-trtgw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002293f40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-trtgw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-trtgw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-trtgw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002251138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ec3e00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022511b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022511d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022511d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022511dc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068495, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068495, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068495, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068495, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001d1b5c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0014fdab0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000616000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://15d5ad8e8b09f51a7dd1d68fa08ab4be50f7c16f50dd9d1f93e224a1d55a8811"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d1b600), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d1b5e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:56:09.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mjjsm" for this suite.
Jan 31 11:56:33.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:56:33.488: INFO: namespace: e2e-tests-init-container-mjjsm, resource: bindings, ignored listing per whitelist
Jan 31 11:56:33.663: INFO: namespace e2e-tests-init-container-mjjsm deletion completed in 24.342938404s

• [SLOW TEST:98.634 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
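The init-container test above relies on the ordering guarantee visible in the pod dump: init containers run one at a time, in order; on a RestartAlways pod a failing init container (init1, running /bin/false) is restarted with backoff, and until it succeeds the later init container (init2) and the app container (run1) stay in "incomplete"/"unready" status. A toy simulation of that gate, with made-up shapes for the per-attempt outcomes:

```python
def pod_startup(init_results, max_retries):
    """Toy model of init-container ordering on a RestartAlways pod.
    init_results: one list of per-attempt outcomes (True = exit 0) per
    init container, in order. A container that never succeeds within
    max_retries blocks everything after it, as in the log above."""
    for outcomes in init_results:
        attempts = 0
        while attempts <= max_retries:
            if outcomes[min(attempts, len(outcomes) - 1)]:
                break  # this init container succeeded; move to the next
            attempts += 1
        else:
            return "app-containers-blocked"
    return "app-containers-started"
```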
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:56:33.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 31 11:56:34.005: INFO: Waiting up to 5m0s for pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005" in namespace "e2e-tests-var-expansion-xjxgf" to be "success or failure"
Jan 31 11:56:34.028: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.117874ms
Jan 31 11:56:36.373: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.367315046s
Jan 31 11:56:38.392: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386097737s
Jan 31 11:56:40.889: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.883938421s
Jan 31 11:56:42.950: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.944368952s
Jan 31 11:56:44.979: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.973410373s
STEP: Saw pod success
Jan 31 11:56:44.979: INFO: Pod "var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:56:44.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 11:56:45.247: INFO: Waiting for pod var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005 to disappear
Jan 31 11:56:45.275: INFO: Pod var-expansion-ba0834bd-4420-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:56:45.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-xjxgf" for this suite.
Jan 31 11:56:51.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:56:51.668: INFO: namespace: e2e-tests-var-expansion-xjxgf, resource: bindings, ignored listing per whitelist
Jan 31 11:56:51.686: INFO: namespace e2e-tests-var-expansion-xjxgf deletion completed in 6.40444264s

• [SLOW TEST:18.021 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
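The variable-expansion test above checks Kubernetes-style substitution in a container's args: `$(NAME)` is replaced from the container's environment, `$$` escapes to a literal `$`, and an unresolvable `$(NAME)` is left untouched. A self-contained sketch of that expansion (simplified relative to the real kubelet implementation):

```python
def expand(arg, env):
    """Kubernetes-style $(VAR) expansion for container command/args:
    $(NAME) comes from env, $$ yields a literal $, and an unknown
    $(NAME) passes through unchanged. Simplified illustrative sketch."""
    out, i = [], 0
    while i < len(arg):
        if arg.startswith("$$", i):
            out.append("$")          # escaped dollar sign
            i += 2
        elif arg.startswith("$(", i):
            j = arg.find(")", i)
            if j == -1:              # unterminated reference: keep as-is
                out.append(arg[i:])
                break
            name = arg[i + 2:j]
            out.append(env.get(name, arg[i:j + 1]))
            i = j + 1
        else:
            out.append(arg[i])
            i += 1
    return "".join(out)
```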
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:56:51.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c4b8ff43-4420-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:56:51.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-b4k5z" to be "success or failure"
Jan 31 11:56:51.946: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090336ms
Jan 31 11:56:54.187: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246750957s
Jan 31 11:56:56.204: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263461607s
Jan 31 11:56:58.220: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280272333s
Jan 31 11:57:00.583: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.642847652s
Jan 31 11:57:02.625: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.684483684s
STEP: Saw pod success
Jan 31 11:57:02.625: INFO: Pod "pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:57:02.645: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 11:57:03.063: INFO: Waiting for pod pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005 to disappear
Jan 31 11:57:03.078: INFO: Pod pod-configmaps-c4c5dbad-4420-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:57:03.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-b4k5z" for this suite.
Jan 31 11:57:09.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:57:09.204: INFO: namespace: e2e-tests-configmap-b4k5z, resource: bindings, ignored listing per whitelist
Jan 31 11:57:09.297: INFO: namespace e2e-tests-configmap-b4k5z deletion completed in 6.208719932s

• [SLOW TEST:17.611 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
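For reference, the "consumable from pods in volume with mappings" test above creates a ConfigMap and mounts selected keys at remapped file paths via the volume's `items` list. A minimal sketch of the kind of pod manifest involved, built as plain Python dicts (key names, paths, and the image are illustrative placeholders, not taken from the test source):

```python
# Hypothetical sketch of a pod that consumes a ConfigMap volume "with mappings":
# the `items` list remaps a ConfigMap data key to a chosen file path.
# All names below are illustrative placeholders.

def configmap_mapping_pod(cm_name: str) -> dict:
    """Return a pod manifest that mounts one ConfigMap key at a mapped path."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps-example"},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "configmap-volume",
                "configMap": {
                    "name": cm_name,
                    # Map the data key "data-1" to the file "path/to/data-1".
                    "items": [{"key": "data-1", "path": "path/to/data-1"}],
                },
            }],
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",  # placeholder image
                "command": ["cat", "/etc/configmap-volume/path/to/data-1"],
                "volumeMounts": [{
                    "name": "configmap-volume",
                    "mountPath": "/etc/configmap-volume",
                }],
            }],
        },
    }

pod = configmap_mapping_pod("configmap-test-volume-map-example")
```

The pod terminates after `cat`, which is why the log above waits for phase "Succeeded" rather than readiness.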
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:57:09.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-cf535711-4420-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:57:09.658: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-5tqfp" to be "success or failure"
Jan 31 11:57:09.867: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 208.898297ms
Jan 31 11:57:11.895: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237068901s
Jan 31 11:57:13.922: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2646262s
Jan 31 11:57:16.233: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574762367s
Jan 31 11:57:18.247: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589552387s
Jan 31 11:57:20.260: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.602102676s
STEP: Saw pod success
Jan 31 11:57:20.260: INFO: Pod "pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:57:20.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 11:57:20.711: INFO: Waiting for pod pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005 to disappear
Jan 31 11:57:20.729: INFO: Pod pod-configmaps-cf55057a-4420-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:57:20.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5tqfp" for this suite.
Jan 31 11:57:26.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:57:26.826: INFO: namespace: e2e-tests-configmap-5tqfp, resource: bindings, ignored listing per whitelist
Jan 31 11:57:27.024: INFO: namespace e2e-tests-configmap-5tqfp deletion completed in 6.279377143s

• [SLOW TEST:17.727 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
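The "as non-root" variant of the same test additionally sets a securityContext so the consuming container runs under an unprivileged UID. A sketch of that fragment (the UID value is an assumption, not taken from the test source):

```python
# Hypothetical sketch: the "as non-root" ConfigMap volume test adds a
# securityContext requesting an unprivileged UID. The UID is illustrative.

NON_ROOT_UID = 1000  # assumed unprivileged UID, not taken from the test source

def non_root_security_context(uid: int = NON_ROOT_UID) -> dict:
    """securityContext fragment forcing the container to run as non-root."""
    return {"runAsUser": uid, "runAsNonRoot": True}

ctx = non_root_security_context()
```

With `runAsNonRoot: true`, the kubelet refuses to start the container if the effective UID resolves to 0, so the mounted files must be readable by the non-root user.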
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:57:27.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:57:27.369: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 31 11:57:27.397: INFO: Number of nodes with available pods: 0
Jan 31 11:57:27.398: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 31 11:57:27.541: INFO: Number of nodes with available pods: 0
Jan 31 11:57:27.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:29.265: INFO: Number of nodes with available pods: 0
Jan 31 11:57:29.265: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:29.641: INFO: Number of nodes with available pods: 0
Jan 31 11:57:29.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:30.625: INFO: Number of nodes with available pods: 0
Jan 31 11:57:30.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:31.580: INFO: Number of nodes with available pods: 0
Jan 31 11:57:31.580: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:33.148: INFO: Number of nodes with available pods: 0
Jan 31 11:57:33.148: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:33.807: INFO: Number of nodes with available pods: 0
Jan 31 11:57:33.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:34.637: INFO: Number of nodes with available pods: 0
Jan 31 11:57:34.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:35.561: INFO: Number of nodes with available pods: 0
Jan 31 11:57:35.561: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:36.572: INFO: Number of nodes with available pods: 1
Jan 31 11:57:36.573: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 31 11:57:36.771: INFO: Number of nodes with available pods: 1
Jan 31 11:57:36.771: INFO: Number of running nodes: 0, number of available pods: 1
Jan 31 11:57:37.793: INFO: Number of nodes with available pods: 0
Jan 31 11:57:37.793: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 31 11:57:37.857: INFO: Number of nodes with available pods: 0
Jan 31 11:57:37.857: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:38.882: INFO: Number of nodes with available pods: 0
Jan 31 11:57:38.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:39.887: INFO: Number of nodes with available pods: 0
Jan 31 11:57:39.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:41.645: INFO: Number of nodes with available pods: 0
Jan 31 11:57:41.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:41.894: INFO: Number of nodes with available pods: 0
Jan 31 11:57:41.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:42.876: INFO: Number of nodes with available pods: 0
Jan 31 11:57:42.876: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:43.893: INFO: Number of nodes with available pods: 0
Jan 31 11:57:43.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:44.878: INFO: Number of nodes with available pods: 0
Jan 31 11:57:44.878: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:45.881: INFO: Number of nodes with available pods: 0
Jan 31 11:57:45.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:46.874: INFO: Number of nodes with available pods: 0
Jan 31 11:57:46.874: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:47.899: INFO: Number of nodes with available pods: 0
Jan 31 11:57:47.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:48.883: INFO: Number of nodes with available pods: 0
Jan 31 11:57:48.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:49.870: INFO: Number of nodes with available pods: 0
Jan 31 11:57:49.871: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:50.876: INFO: Number of nodes with available pods: 0
Jan 31 11:57:50.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:51.886: INFO: Number of nodes with available pods: 0
Jan 31 11:57:51.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:52.932: INFO: Number of nodes with available pods: 0
Jan 31 11:57:52.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:53.880: INFO: Number of nodes with available pods: 0
Jan 31 11:57:53.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:55.070: INFO: Number of nodes with available pods: 0
Jan 31 11:57:55.071: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:55.881: INFO: Number of nodes with available pods: 0
Jan 31 11:57:55.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:56.874: INFO: Number of nodes with available pods: 0
Jan 31 11:57:56.874: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:57.881: INFO: Number of nodes with available pods: 0
Jan 31 11:57:57.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:57:59.276: INFO: Number of nodes with available pods: 0
Jan 31 11:57:59.276: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:58:00.037: INFO: Number of nodes with available pods: 0
Jan 31 11:58:00.037: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:58:00.878: INFO: Number of nodes with available pods: 0
Jan 31 11:58:00.878: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:58:01.873: INFO: Number of nodes with available pods: 0
Jan 31 11:58:01.873: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 11:58:02.916: INFO: Number of nodes with available pods: 1
Jan 31 11:58:02.916: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lppqp, will wait for the garbage collector to delete the pods
Jan 31 11:58:03.015: INFO: Deleting DaemonSet.extensions daemon-set took: 32.078057ms
Jan 31 11:58:03.216: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.152399ms
Jan 31 11:58:12.833: INFO: Number of nodes with available pods: 0
Jan 31 11:58:12.833: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 11:58:12.838: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lppqp/daemonsets","resourceVersion":"20078131"},"items":null}

Jan 31 11:58:12.841: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lppqp/pods","resourceVersion":"20078131"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:58:12.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lppqp" for this suite.
Jan 31 11:58:19.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:58:19.186: INFO: namespace: e2e-tests-daemonsets-lppqp, resource: bindings, ignored listing per whitelist
Jan 31 11:58:19.197: INFO: namespace e2e-tests-daemonsets-lppqp deletion completed in 6.189104287s

• [SLOW TEST:52.173 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
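The "complex daemon" test above drives a DaemonSet through a node-selector change: the daemon pod appears only once the node carries the selected label (blue), and is unscheduled when the label flips to green. A toy model of that selector logic, with illustrative label keys and values:

```python
# Toy model of the selector matching the DaemonSet test exercises: a daemon
# pod is due on a node only when the node's labels satisfy the nodeSelector.
# The label key "color" and its values are illustrative.

def nodes_due_for_daemon(node_labels: dict, selector: dict) -> list:
    """Return node names whose labels match every key/value in `selector`."""
    return [
        name for name, labels in node_labels.items()
        if all(labels.get(k) == v for k, v in selector.items())
    ]

cluster = {"hunter-server-hu5at5svl7ps": {"color": "blue"}}

# While the node is labeled blue, the "blue" selector matches it.
due_blue = nodes_due_for_daemon(cluster, {"color": "blue"})

# After the node is relabeled green, the "blue" selector matches nothing,
# so the daemon pod is unscheduled (as the log shows around 11:57:36-11:57:37).
cluster["hunter-server-hu5at5svl7ps"]["color"] = "green"
due_after_relabel = nodes_due_for_daemon(cluster, {"color": "blue"})
```

The test then updates the DaemonSet's own selector to green (and its strategy to RollingUpdate), after which the pod is scheduled again, matching the final "Number of running nodes: 1" line.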
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:58:19.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 31 11:58:19.361: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 11:58:19.386: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 11:58:19.399: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 31 11:58:19.413: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:58:19.413: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:58:19.413: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:58:19.413: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 31 11:58:19.413: INFO: 	Container coredns ready: true, restart count 0
Jan 31 11:58:19.413: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 31 11:58:19.413: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 11:58:19.413: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 31 11:58:19.413: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 31 11:58:19.413: INFO: 	Container weave ready: true, restart count 0
Jan 31 11:58:19.413: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 11:58:19.413: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 31 11:58:19.413: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 31 11:58:19.510: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f8fb9a78-4420-11ea-aae6-0242ac110005.15eef6d1a511919f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-r4xwx/filler-pod-f8fb9a78-4420-11ea-aae6-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f8fb9a78-4420-11ea-aae6-0242ac110005.15eef6d29fafb73b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f8fb9a78-4420-11ea-aae6-0242ac110005.15eef6d34436075b], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f8fb9a78-4420-11ea-aae6-0242ac110005.15eef6d36e224aea], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eef6d3fc20a517], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:58:30.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-r4xwx" for this suite.
Jan 31 11:58:36.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:58:37.025: INFO: namespace: e2e-tests-sched-pred-r4xwx, resource: bindings, ignored listing per whitelist
Jan 31 11:58:37.066: INFO: namespace e2e-tests-sched-pred-r4xwx deletion completed in 6.302765866s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:17.868 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
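The resource-limits test above sums the CPU already requested on the node (2×100m coredns, 250m apiserver, 200m controller-manager, 100m scheduler, 20m weave-net, 0m etcd and kube-proxy, per the log), starts a filler pod sized to consume the remainder, then shows that one more pod fails with "Insufficient cpu". The arithmetic, with an assumed allocatable capacity since the log does not state it:

```python
# The scheduling arithmetic behind the "0/1 nodes are available: 1 Insufficient
# cpu." event. Existing requests are taken from the log; the node's allocatable
# CPU (2000m here) is an assumption.

EXISTING_REQUESTS_MILLI = {
    "coredns-79kxx": 100, "coredns-bmkk4": 100,
    "kube-apiserver": 250, "kube-controller-manager": 200,
    "kube-scheduler": 100, "weave-net": 20,
    "etcd": 0, "kube-proxy": 0,
}
ALLOCATABLE_MILLI = 2000  # assumed, not stated in the log

used = sum(EXISTING_REQUESTS_MILLI.values())       # 770m already requested
filler_request = ALLOCATABLE_MILLI - used          # filler pod takes the rest
free_after_filler = ALLOCATABLE_MILLI - used - filler_request

def can_schedule(request_milli: int, free_milli: int) -> bool:
    """A pod fits only if its CPU request does not exceed free capacity."""
    return request_milli <= free_milli

# Any additional pod requesting CPU now fails to schedule.
extra_pod_fits = can_schedule(100, free_after_filler)
```

Note the scheduler counts declared requests, not actual usage: the filler pod runs `pause` and consumes almost no CPU, yet still blocks the extra pod.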
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:58:37.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-04078aa6-4421-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 11:58:38.087: INFO: Waiting up to 5m0s for pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-bdcr5" to be "success or failure"
Jan 31 11:58:38.504: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 415.962693ms
Jan 31 11:58:40.528: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439971027s
Jan 31 11:58:42.551: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463800188s
Jan 31 11:58:45.079: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.991257419s
Jan 31 11:58:47.091: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.003574798s
Jan 31 11:58:49.109: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.021572696s
STEP: Saw pod success
Jan 31 11:58:49.109: INFO: Pod "pod-secrets-040946a7-4421-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:58:49.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-040946a7-4421-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 11:58:49.207: INFO: Waiting for pod pod-secrets-040946a7-4421-11ea-aae6-0242ac110005 to disappear
Jan 31 11:58:49.263: INFO: Pod pod-secrets-040946a7-4421-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:58:49.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bdcr5" for this suite.
Jan 31 11:58:55.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:58:55.550: INFO: namespace: e2e-tests-secrets-bdcr5, resource: bindings, ignored listing per whitelist
Jan 31 11:58:55.558: INFO: namespace e2e-tests-secrets-bdcr5 deletion completed in 6.280575943s

• [SLOW TEST:18.492 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
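The Secrets variant above is structurally the same as the ConfigMap mappings test, but the volume source is a `secret`, whose `data` values must be base64-encoded in the API object. A sketch of building such a Secret (name and data are illustrative):

```python
# Hypothetical sketch of the Secret behind the "volume with mappings" test:
# Secret `data` values are base64-encoded strings, unlike ConfigMap data.
# The name and key/value pairs are illustrative placeholders.
import base64

def secret_manifest(name: str, data: dict) -> dict:
    """Build a Secret manifest, base64-encoding each value as the API requires."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "data": {k: base64.b64encode(v.encode()).decode() for k, v in data.items()},
    }

secret = secret_manifest("secret-test-map-example", {"data-1": "value-1"})
```

When mounted as a volume, the kubelet writes the decoded bytes to the file, so the consuming container reads `value-1`, not the base64 form.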
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:58:55.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 11:58:55.987: INFO: Creating deployment "test-recreate-deployment"
Jan 31 11:58:55.999: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 31 11:58:56.031: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 31 11:58:58.050: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 31 11:58:58.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:59:00.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:59:02.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:59:04.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716068736, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 11:59:06.066: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 31 11:59:06.088: INFO: Updating deployment test-recreate-deployment
Jan 31 11:59:06.088: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 31 11:59:06.710: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-qtvzs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qtvzs/deployments/test-recreate-deployment,UID:0eb9bbc2-4421-11ea-a994-fa163e34d433,ResourceVersion:20078316,Generation:2,CreationTimestamp:2020-01-31 11:58:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-31 11:59:06 +0000 UTC 2020-01-31 11:59:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-31 11:59:06 +0000 UTC 2020-01-31 11:58:56 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 31 11:59:06.887: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-qtvzs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qtvzs/replicasets/test-recreate-deployment-589c4bfd,UID:14ec5018-4421-11ea-a994-fa163e34d433,ResourceVersion:20078315,Generation:1,CreationTimestamp:2020-01-31 11:59:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0eb9bbc2-4421-11ea-a994-fa163e34d433 0xc00211736f 0xc002117380}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 11:59:06.888: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 31 11:59:06.888: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-qtvzs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qtvzs/replicasets/test-recreate-deployment-5bf7f65dc,UID:0ec03a3b-4421-11ea-a994-fa163e34d433,ResourceVersion:20078305,Generation:2,CreationTimestamp:2020-01-31 11:58:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0eb9bbc2-4421-11ea-a994-fa163e34d433 0xc0021174c0 0xc0021174c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 11:59:06.910: INFO: Pod "test-recreate-deployment-589c4bfd-pgkss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-pgkss,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-qtvzs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qtvzs/pods/test-recreate-deployment-589c4bfd-pgkss,UID:14edfcf4-4421-11ea-a994-fa163e34d433,ResourceVersion:20078317,Generation:0,CreationTimestamp:2020-01-31 11:59:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 14ec5018-4421-11ea-a994-fa163e34d433 0xc00225030f 0xc002250320}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bv9pk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bv9pk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-bv9pk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002250620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002250640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:59:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:59:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:59:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 11:59:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 11:59:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:59:06.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-qtvzs" for this suite.
Jan 31 11:59:19.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:59:19.173: INFO: namespace: e2e-tests-deployment-qtvzs, resource: bindings, ignored listing per whitelist
Jan 31 11:59:19.189: INFO: namespace e2e-tests-deployment-qtvzs deletion completed in 12.254898453s

• [SLOW TEST:23.630 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:59:19.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1ca86fd7-4421-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 11:59:19.389: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-pd7hx" to be "success or failure"
Jan 31 11:59:19.395: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.883455ms
Jan 31 11:59:21.410: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020721045s
Jan 31 11:59:23.514: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12465327s
Jan 31 11:59:25.845: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455880915s
Jan 31 11:59:27.867: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.478511679s
Jan 31 11:59:30.022: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633330726s
STEP: Saw pod success
Jan 31 11:59:30.023: INFO: Pod "pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:59:30.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 11:59:30.209: INFO: Waiting for pod pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005 to disappear
Jan 31 11:59:30.217: INFO: Pod pod-configmaps-1ca9899b-4421-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:59:30.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pd7hx" for this suite.
Jan 31 11:59:36.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:59:36.305: INFO: namespace: e2e-tests-configmap-pd7hx, resource: bindings, ignored listing per whitelist
Jan 31 11:59:36.493: INFO: namespace e2e-tests-configmap-pd7hx deletion completed in 6.265714786s

• [SLOW TEST:17.304 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:59:36.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-27127fb1-4421-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 11:59:37.006: INFO: Waiting up to 5m0s for pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-gtrzc" to be "success or failure"
Jan 31 11:59:37.018: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.572301ms
Jan 31 11:59:39.401: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394907005s
Jan 31 11:59:41.428: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42231005s
Jan 31 11:59:43.704: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697603736s
Jan 31 11:59:45.720: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.71391406s
Jan 31 11:59:47.747: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.741120341s
STEP: Saw pod success
Jan 31 11:59:47.748: INFO: Pod "pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 11:59:47.762: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 11:59:47.863: INFO: Waiting for pod pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005 to disappear
Jan 31 11:59:48.008: INFO: Pod pod-secrets-27282d5b-4421-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 11:59:48.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gtrzc" for this suite.
Jan 31 11:59:54.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 11:59:54.216: INFO: namespace: e2e-tests-secrets-gtrzc, resource: bindings, ignored listing per whitelist
Jan 31 11:59:54.274: INFO: namespace e2e-tests-secrets-gtrzc deletion completed in 6.2554121s

• [SLOW TEST:17.781 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 11:59:54.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 11:59:54.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-nm2nh" to be "success or failure"
Jan 31 11:59:55.045: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 467.439159ms
Jan 31 11:59:57.063: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485139241s
Jan 31 11:59:59.082: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503999412s
Jan 31 12:00:01.109: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531309808s
Jan 31 12:00:03.209: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.631763696s
Jan 31 12:00:05.240: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.66248985s
STEP: Saw pod success
Jan 31 12:00:05.240: INFO: Pod "downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:00:05.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:00:06.682: INFO: Waiting for pod downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005 to disappear
Jan 31 12:00:06.706: INFO: Pod downwardapi-volume-3189a734-4421-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:00:06.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nm2nh" for this suite.
Jan 31 12:00:14.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:00:15.219: INFO: namespace: e2e-tests-projected-nm2nh, resource: bindings, ignored listing per whitelist
Jan 31 12:00:15.222: INFO: namespace e2e-tests-projected-nm2nh deletion completed in 8.497764116s

• [SLOW TEST:20.947 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:00:15.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 31 12:00:15.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cpm79'
Jan 31 12:00:17.343: INFO: stderr: ""
Jan 31 12:00:17.343: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 31 12:00:18.360: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:18.360: INFO: Found 0 / 1
Jan 31 12:00:19.389: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:19.390: INFO: Found 0 / 1
Jan 31 12:00:20.430: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:20.430: INFO: Found 0 / 1
Jan 31 12:00:21.351: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:21.351: INFO: Found 0 / 1
Jan 31 12:00:22.362: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:22.363: INFO: Found 0 / 1
Jan 31 12:00:23.487: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:23.487: INFO: Found 0 / 1
Jan 31 12:00:24.360: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:24.360: INFO: Found 0 / 1
Jan 31 12:00:25.354: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:25.354: INFO: Found 0 / 1
Jan 31 12:00:26.370: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:26.370: INFO: Found 0 / 1
Jan 31 12:00:27.360: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:27.361: INFO: Found 1 / 1
Jan 31 12:00:27.361: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 12:00:27.368: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:00:27.368: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Jan 31 12:00:27.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6dnxs redis-master --namespace=e2e-tests-kubectl-cpm79'
Jan 31 12:00:27.750: INFO: stderr: ""
Jan 31 12:00:27.750: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Jan 12:00:25.525 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Jan 12:00:25.525 # Server started, Redis version 3.2.12\n1:M 31 Jan 12:00:25.526 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Jan 12:00:25.526 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 31 12:00:27.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6dnxs redis-master --namespace=e2e-tests-kubectl-cpm79 --tail=1'
Jan 31 12:00:27.983: INFO: stderr: ""
Jan 31 12:00:27.983: INFO: stdout: "1:M 31 Jan 12:00:25.526 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 31 12:00:27.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6dnxs redis-master --namespace=e2e-tests-kubectl-cpm79 --limit-bytes=1'
Jan 31 12:00:28.210: INFO: stderr: ""
Jan 31 12:00:28.210: INFO: stdout: " "
STEP: exposing timestamps
Jan 31 12:00:28.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6dnxs redis-master --namespace=e2e-tests-kubectl-cpm79 --tail=1 --timestamps'
Jan 31 12:00:28.365: INFO: stderr: ""
Jan 31 12:00:28.365: INFO: stdout: "2020-01-31T12:00:25.531171272Z 1:M 31 Jan 12:00:25.526 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 31 12:00:30.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6dnxs redis-master --namespace=e2e-tests-kubectl-cpm79 --since=1s'
Jan 31 12:00:31.097: INFO: stderr: ""
Jan 31 12:00:31.098: INFO: stdout: ""
Jan 31 12:00:31.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6dnxs redis-master --namespace=e2e-tests-kubectl-cpm79 --since=24h'
Jan 31 12:00:31.241: INFO: stderr: ""
Jan 31 12:00:31.241: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Jan 12:00:25.525 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Jan 12:00:25.525 # Server started, Redis version 3.2.12\n1:M 31 Jan 12:00:25.526 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Jan 12:00:25.526 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 31 12:00:31.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cpm79'
Jan 31 12:00:31.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:00:31.417: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 31 12:00:31.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-cpm79'
Jan 31 12:00:31.597: INFO: stderr: "No resources found.\n"
Jan 31 12:00:31.597: INFO: stdout: ""
Jan 31 12:00:31.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-cpm79 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 12:00:31.775: INFO: stderr: ""
Jan 31 12:00:31.776: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:00:31.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cpm79" for this suite.
Jan 31 12:00:37.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:00:38.041: INFO: namespace: e2e-tests-kubectl-cpm79, resource: bindings, ignored listing per whitelist
Jan 31 12:00:38.061: INFO: namespace e2e-tests-kubectl-cpm79 deletion completed in 6.255074313s

• [SLOW TEST:22.838 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:00:38.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 12:00:38.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jlrng'
Jan 31 12:00:38.431: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 12:00:38.431: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 31 12:00:38.446: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 31 12:00:38.592: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 31 12:00:38.644: INFO: scanned /root for discovery docs: 
Jan 31 12:00:38.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-jlrng'
Jan 31 12:01:04.499: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 12:01:04.500: INFO: stdout: "Created e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1\nScaling up e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 31 12:01:04.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jlrng'
Jan 31 12:01:04.884: INFO: stderr: ""
Jan 31 12:01:04.884: INFO: stdout: "e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1-g2k47 "
Jan 31 12:01:04.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1-g2k47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jlrng'
Jan 31 12:01:05.057: INFO: stderr: ""
Jan 31 12:01:05.057: INFO: stdout: "true"
Jan 31 12:01:05.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1-g2k47 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jlrng'
Jan 31 12:01:05.160: INFO: stderr: ""
Jan 31 12:01:05.160: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 31 12:01:05.160: INFO: e2e-test-nginx-rc-a13efe91d1d4863162f4a3d46f1adba1-g2k47 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 31 12:01:05.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jlrng'
Jan 31 12:01:05.268: INFO: stderr: ""
Jan 31 12:01:05.269: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:01:05.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jlrng" for this suite.
Jan 31 12:01:29.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:01:29.586: INFO: namespace: e2e-tests-kubectl-jlrng, resource: bindings, ignored listing per whitelist
Jan 31 12:01:29.681: INFO: namespace e2e-tests-kubectl-jlrng deletion completed in 24.242897696s

• [SLOW TEST:51.620 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
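The rolling-update flow above can be reproduced manually (a sketch; the RC name and namespace are illustrative, and note that both the `run/v1` generator and `kubectl rolling-update` are deprecated here and were removed in later kubectl releases):

```shell
# Create a ReplicationController from an image (run/v1 generator, kubectl <= 1.17).
kubectl run my-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=my-ns

# Roll the RC over to the same image: a hash-suffixed RC is created and
# scaled up while the old one scales down, then renamed back to my-rc.
kubectl rolling-update my-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine \
  --image-pull-policy=IfNotPresent --namespace=my-ns

# Clean up.
kubectl delete rc my-rc --namespace=my-ns
```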
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:01:29.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 31 12:01:29.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:30.229: INFO: stderr: ""
Jan 31 12:01:30.229: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 12:01:30.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:30.567: INFO: stderr: ""
Jan 31 12:01:30.567: INFO: stdout: "update-demo-nautilus-fbdp4 update-demo-nautilus-xvtp4 "
Jan 31 12:01:30.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbdp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:30.741: INFO: stderr: ""
Jan 31 12:01:30.741: INFO: stdout: ""
Jan 31 12:01:30.741: INFO: update-demo-nautilus-fbdp4 is created but not running
Jan 31 12:01:35.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:35.942: INFO: stderr: ""
Jan 31 12:01:35.942: INFO: stdout: "update-demo-nautilus-fbdp4 update-demo-nautilus-xvtp4 "
Jan 31 12:01:35.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbdp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:36.095: INFO: stderr: ""
Jan 31 12:01:36.095: INFO: stdout: ""
Jan 31 12:01:36.095: INFO: update-demo-nautilus-fbdp4 is created but not running
Jan 31 12:01:41.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:41.191: INFO: stderr: ""
Jan 31 12:01:41.191: INFO: stdout: "update-demo-nautilus-fbdp4 update-demo-nautilus-xvtp4 "
Jan 31 12:01:41.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbdp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:41.272: INFO: stderr: ""
Jan 31 12:01:41.273: INFO: stdout: ""
Jan 31 12:01:41.273: INFO: update-demo-nautilus-fbdp4 is created but not running
Jan 31 12:01:46.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:46.540: INFO: stderr: ""
Jan 31 12:01:46.540: INFO: stdout: "update-demo-nautilus-fbdp4 update-demo-nautilus-xvtp4 "
Jan 31 12:01:46.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbdp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:46.747: INFO: stderr: ""
Jan 31 12:01:46.747: INFO: stdout: "true"
Jan 31 12:01:46.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fbdp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:46.868: INFO: stderr: ""
Jan 31 12:01:46.869: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:01:46.869: INFO: validating pod update-demo-nautilus-fbdp4
Jan 31 12:01:46.944: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:01:46.944: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:01:46.944: INFO: update-demo-nautilus-fbdp4 is verified up and running
Jan 31 12:01:46.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvtp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:47.115: INFO: stderr: ""
Jan 31 12:01:47.115: INFO: stdout: "true"
Jan 31 12:01:47.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvtp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:47.232: INFO: stderr: ""
Jan 31 12:01:47.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:01:47.232: INFO: validating pod update-demo-nautilus-xvtp4
Jan 31 12:01:47.246: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:01:47.246: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:01:47.246: INFO: update-demo-nautilus-xvtp4 is verified up and running
STEP: scaling down the replication controller
Jan 31 12:01:47.248: INFO: scanned /root for discovery docs: 
Jan 31 12:01:47.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:48.526: INFO: stderr: ""
Jan 31 12:01:48.526: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 12:01:48.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:48.839: INFO: stderr: ""
Jan 31 12:01:48.840: INFO: stdout: "update-demo-nautilus-fbdp4 update-demo-nautilus-xvtp4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 31 12:01:53.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:54.124: INFO: stderr: ""
Jan 31 12:01:54.125: INFO: stdout: "update-demo-nautilus-xvtp4 "
Jan 31 12:01:54.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvtp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:54.229: INFO: stderr: ""
Jan 31 12:01:54.229: INFO: stdout: "true"
Jan 31 12:01:54.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvtp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:54.334: INFO: stderr: ""
Jan 31 12:01:54.334: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:01:54.334: INFO: validating pod update-demo-nautilus-xvtp4
Jan 31 12:01:54.343: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:01:54.343: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:01:54.343: INFO: update-demo-nautilus-xvtp4 is verified up and running
STEP: scaling up the replication controller
Jan 31 12:01:54.347: INFO: scanned /root for discovery docs: 
Jan 31 12:01:54.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:55.986: INFO: stderr: ""
Jan 31 12:01:55.987: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 12:01:55.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:56.263: INFO: stderr: ""
Jan 31 12:01:56.263: INFO: stdout: "update-demo-nautilus-7kpww update-demo-nautilus-xvtp4 "
Jan 31 12:01:56.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kpww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:01:56.404: INFO: stderr: ""
Jan 31 12:01:56.404: INFO: stdout: ""
Jan 31 12:01:56.404: INFO: update-demo-nautilus-7kpww is created but not running
Jan 31 12:02:01.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:01.680: INFO: stderr: ""
Jan 31 12:02:01.680: INFO: stdout: "update-demo-nautilus-7kpww update-demo-nautilus-xvtp4 "
Jan 31 12:02:01.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kpww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:01.845: INFO: stderr: ""
Jan 31 12:02:01.845: INFO: stdout: ""
Jan 31 12:02:01.845: INFO: update-demo-nautilus-7kpww is created but not running
Jan 31 12:02:06.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:07.096: INFO: stderr: ""
Jan 31 12:02:07.096: INFO: stdout: "update-demo-nautilus-7kpww update-demo-nautilus-xvtp4 "
Jan 31 12:02:07.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kpww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:07.238: INFO: stderr: ""
Jan 31 12:02:07.238: INFO: stdout: "true"
Jan 31 12:02:07.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kpww -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:07.338: INFO: stderr: ""
Jan 31 12:02:07.338: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:02:07.338: INFO: validating pod update-demo-nautilus-7kpww
Jan 31 12:02:07.355: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:02:07.355: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:02:07.355: INFO: update-demo-nautilus-7kpww is verified up and running
Jan 31 12:02:07.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvtp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:07.502: INFO: stderr: ""
Jan 31 12:02:07.502: INFO: stdout: "true"
Jan 31 12:02:07.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvtp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:07.698: INFO: stderr: ""
Jan 31 12:02:07.698: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:02:07.698: INFO: validating pod update-demo-nautilus-xvtp4
Jan 31 12:02:07.734: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:02:07.735: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:02:07.735: INFO: update-demo-nautilus-xvtp4 is verified up and running
STEP: using delete to clean up resources
Jan 31 12:02:07.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:07.842: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:02:07.842: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 12:02:07.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-nzhl2'
Jan 31 12:02:08.050: INFO: stderr: "No resources found.\n"
Jan 31 12:02:08.050: INFO: stdout: ""
Jan 31 12:02:08.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-nzhl2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 12:02:08.195: INFO: stderr: ""
Jan 31 12:02:08.195: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:02:08.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nzhl2" for this suite.
Jan 31 12:02:32.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:02:32.301: INFO: namespace: e2e-tests-kubectl-nzhl2, resource: bindings, ignored listing per whitelist
Jan 31 12:02:32.454: INFO: namespace e2e-tests-kubectl-nzhl2 deletion completed in 24.234808581s

• [SLOW TEST:62.773 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
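The scale-down/scale-up cycle and the polling loop the test runs can be sketched as follows (namespace name illustrative; the go-template is the same one the test uses to list pod names by label):

```shell
# Scale the RC down to one replica, then back up to two.
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=my-ns
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=my-ns

# Poll pod names by label until the count matches the desired replicas,
# as the test does between each scale operation.
kubectl get pods -l name=update-demo --namespace=my-ns \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
```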
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:02:32.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 31 12:02:32.791: INFO: Waiting up to 5m0s for pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-56x7k" to be "success or failure"
Jan 31 12:02:32.806: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.488393ms
Jan 31 12:02:34.949: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157826269s
Jan 31 12:02:36.970: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179293533s
Jan 31 12:02:39.460: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.668441201s
Jan 31 12:02:41.475: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.684306003s
Jan 31 12:02:43.489: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.698339214s
STEP: Saw pod success
Jan 31 12:02:43.490: INFO: Pod "downward-api-8fee4943-4421-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:02:43.493: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8fee4943-4421-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 12:02:44.398: INFO: Waiting for pod downward-api-8fee4943-4421-11ea-aae6-0242ac110005 to disappear
Jan 31 12:02:44.428: INFO: Pod downward-api-8fee4943-4421-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:02:44.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-56x7k" for this suite.
Jan 31 12:02:50.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:02:50.882: INFO: namespace: e2e-tests-downward-api-56x7k, resource: bindings, ignored listing per whitelist
Jan 31 12:02:50.963: INFO: namespace e2e-tests-downward-api-56x7k deletion completed in 6.369528983s

• [SLOW TEST:18.508 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
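A minimal pod equivalent to the one this test creates surfaces the node's IP through the downward API's `status.hostIP` field reference (a sketch; the pod name, namespace, and image are illustrative, not the test's actual spec):

```shell
# Pod that exposes the host IP as an env var via the downward API.
kubectl create --namespace=my-ns -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
```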
------------------------------
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:02:50.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-9af8873f-4421-11ea-aae6-0242ac110005
STEP: Creating secret with name s-test-opt-upd-9af887c0-4421-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9af8873f-4421-11ea-aae6-0242ac110005
STEP: Updating secret s-test-opt-upd-9af887c0-4421-11ea-aae6-0242ac110005
STEP: Creating secret with name s-test-opt-create-9af887ea-4421-11ea-aae6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:04:15.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-66pm5" for this suite.
Jan 31 12:04:39.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:04:39.523: INFO: namespace: e2e-tests-secrets-66pm5, resource: bindings, ignored listing per whitelist
Jan 31 12:04:39.605: INFO: namespace e2e-tests-secrets-66pm5 deletion completed in 24.228219795s

• [SLOW TEST:108.642 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
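The behavior under test — secret deletions, updates, and late creations propagating into a mounted volume — can be sketched with an optional secret volume (names and image are illustrative; `optional: true` is what lets the pod start even while the secret is absent):

```shell
# Pod mounting an optional secret; changes to the secret are eventually
# reflected in the files under the mount path.
kubectl create --namespace=my-ns -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: s-test-opt
      optional: true
EOF
```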
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:04:39.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 31 12:04:40.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-s8dd7 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 31 12:04:49.030: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0131 12:04:47.691677    2735 log.go:172] (0xc0001380b0) (0xc000326140) Create stream\nI0131 12:04:47.691988    2735 log.go:172] (0xc0001380b0) (0xc000326140) Stream added, broadcasting: 1\nI0131 12:04:47.697983    2735 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0131 12:04:47.698046    2735 log.go:172] (0xc0001380b0) (0xc0006b25a0) Create stream\nI0131 12:04:47.698063    2735 log.go:172] (0xc0001380b0) (0xc0006b25a0) Stream added, broadcasting: 3\nI0131 12:04:47.701066    2735 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0131 12:04:47.701158    2735 log.go:172] (0xc0001380b0) (0xc0000d2000) Create stream\nI0131 12:04:47.701193    2735 log.go:172] (0xc0001380b0) (0xc0000d2000) Stream added, broadcasting: 5\nI0131 12:04:47.703755    2735 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0131 12:04:47.703797    2735 log.go:172] (0xc0001380b0) (0xc0003261e0) Create stream\nI0131 12:04:47.703812    2735 log.go:172] (0xc0001380b0) (0xc0003261e0) Stream added, broadcasting: 7\nI0131 12:04:47.706542    2735 log.go:172] (0xc0001380b0) Reply frame received for 7\nI0131 12:04:47.707012    2735 log.go:172] (0xc0006b25a0) (3) Writing data frame\nI0131 12:04:47.707274    2735 log.go:172] (0xc0006b25a0) (3) Writing data frame\nI0131 12:04:47.715714    2735 log.go:172] (0xc0001380b0) Data frame received for 5\nI0131 12:04:47.715734    2735 log.go:172] (0xc0000d2000) (5) Data frame handling\nI0131 12:04:47.715750    2735 log.go:172] (0xc0000d2000) (5) Data frame sent\nI0131 12:04:47.717461    2735 log.go:172] (0xc0001380b0) Data frame received for 5\nI0131 12:04:47.717536    2735 log.go:172] (0xc0000d2000) (5) Data frame handling\nI0131 12:04:47.717568    2735 log.go:172] (0xc0000d2000) (5) Data frame 
sent\nI0131 12:04:48.938313    2735 log.go:172] (0xc0001380b0) (0xc0003261e0) Stream removed, broadcasting: 7\nI0131 12:04:48.938893    2735 log.go:172] (0xc0001380b0) Data frame received for 1\nI0131 12:04:48.938997    2735 log.go:172] (0xc0001380b0) (0xc0000d2000) Stream removed, broadcasting: 5\nI0131 12:04:48.939079    2735 log.go:172] (0xc000326140) (1) Data frame handling\nI0131 12:04:48.939153    2735 log.go:172] (0xc000326140) (1) Data frame sent\nI0131 12:04:48.939217    2735 log.go:172] (0xc0001380b0) (0xc000326140) Stream removed, broadcasting: 1\nI0131 12:04:48.939335    2735 log.go:172] (0xc0001380b0) (0xc0006b25a0) Stream removed, broadcasting: 3\nI0131 12:04:48.939534    2735 log.go:172] (0xc0001380b0) Go away received\nI0131 12:04:48.939955    2735 log.go:172] (0xc0001380b0) (0xc000326140) Stream removed, broadcasting: 1\nI0131 12:04:48.940119    2735 log.go:172] (0xc0001380b0) (0xc0006b25a0) Stream removed, broadcasting: 3\nI0131 12:04:48.940153    2735 log.go:172] (0xc0001380b0) (0xc0000d2000) Stream removed, broadcasting: 5\nI0131 12:04:48.940164    2735 log.go:172] (0xc0001380b0) (0xc0003261e0) Stream removed, broadcasting: 7\n"
Jan 31 12:04:49.030: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:04:51.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s8dd7" for this suite.
Jan 31 12:04:57.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:04:57.708: INFO: namespace: e2e-tests-kubectl-s8dd7, resource: bindings, ignored listing per whitelist
Jan 31 12:04:57.870: INFO: namespace e2e-tests-kubectl-s8dd7 deletion completed in 6.811923518s

• [SLOW TEST:18.265 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:04:57.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-e68ae13a-4421-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 12:04:58.116: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-zlm5v" to be "success or failure"
Jan 31 12:04:58.124: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.84416ms
Jan 31 12:05:00.195: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079397714s
Jan 31 12:05:02.211: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094776404s
Jan 31 12:05:04.324: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208047169s
Jan 31 12:05:06.349: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.233520527s
Jan 31 12:05:08.517: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.401254156s
STEP: Saw pod success
Jan 31 12:05:08.518: INFO: Pod "pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:05:08.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 12:05:08.693: INFO: Waiting for pod pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005 to disappear
Jan 31 12:05:08.718: INFO: Pod pod-projected-configmaps-e68e30a2-4421-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:05:08.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zlm5v" for this suite.
Jan 31 12:05:14.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:05:15.007: INFO: namespace: e2e-tests-projected-zlm5v, resource: bindings, ignored listing per whitelist
Jan 31 12:05:15.012: INFO: namespace e2e-tests-projected-zlm5v deletion completed in 6.263174343s

• [SLOW TEST:17.142 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:05:15.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:06:15.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zj4jt" for this suite.
Jan 31 12:06:39.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:06:39.354: INFO: namespace: e2e-tests-container-probe-zj4jt, resource: bindings, ignored listing per whitelist
Jan 31 12:06:39.499: INFO: namespace e2e-tests-container-probe-zj4jt deletion completed in 24.218079586s

• [SLOW TEST:84.486 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:06:39.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:06:39.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:06:49.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r9cd9" for this suite.
Jan 31 12:07:33.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:07:33.970: INFO: namespace: e2e-tests-pods-r9cd9, resource: bindings, ignored listing per whitelist
Jan 31 12:07:34.038: INFO: namespace e2e-tests-pods-r9cd9 deletion completed in 44.234627565s

• [SLOW TEST:54.539 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:07:34.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fvvjq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-fvvjq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 12:07:48.402: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.409: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.420: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.429: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.435: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.442: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.452: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.461: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.473: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.481: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.489: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.497: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.511: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.533: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.555: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.575: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.623: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.657: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.692: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.743: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005: the server could not find the requested resource (get pods dns-test-43a23f75-4422-11ea-aae6-0242ac110005)
Jan 31 12:07:48.744: INFO: Lookups using e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-fvvjq.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 31 12:07:54.221: INFO: DNS probes using e2e-tests-dns-fvvjq/dns-test-43a23f75-4422-11ea-aae6-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:07:54.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-fvvjq" for this suite.
Jan 31 12:08:02.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:08:02.703: INFO: namespace: e2e-tests-dns-fvvjq, resource: bindings, ignored listing per whitelist
Jan 31 12:08:02.895: INFO: namespace e2e-tests-dns-fvvjq deletion completed in 8.50839851s

• [SLOW TEST:28.856 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:08:02.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 31 12:08:03.221: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dtx59,SelfLink:/api/v1/namespaces/e2e-tests-watch-dtx59/configmaps/e2e-watch-test-label-changed,UID:54daee78-4422-11ea-a994-fa163e34d433,ResourceVersion:20079453,Generation:0,CreationTimestamp:2020-01-31 12:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 12:08:03.222: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dtx59,SelfLink:/api/v1/namespaces/e2e-tests-watch-dtx59/configmaps/e2e-watch-test-label-changed,UID:54daee78-4422-11ea-a994-fa163e34d433,ResourceVersion:20079454,Generation:0,CreationTimestamp:2020-01-31 12:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 31 12:08:03.222: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dtx59,SelfLink:/api/v1/namespaces/e2e-tests-watch-dtx59/configmaps/e2e-watch-test-label-changed,UID:54daee78-4422-11ea-a994-fa163e34d433,ResourceVersion:20079455,Generation:0,CreationTimestamp:2020-01-31 12:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 31 12:08:13.297: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dtx59,SelfLink:/api/v1/namespaces/e2e-tests-watch-dtx59/configmaps/e2e-watch-test-label-changed,UID:54daee78-4422-11ea-a994-fa163e34d433,ResourceVersion:20079469,Generation:0,CreationTimestamp:2020-01-31 12:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 12:08:13.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dtx59,SelfLink:/api/v1/namespaces/e2e-tests-watch-dtx59/configmaps/e2e-watch-test-label-changed,UID:54daee78-4422-11ea-a994-fa163e34d433,ResourceVersion:20079470,Generation:0,CreationTimestamp:2020-01-31 12:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 31 12:08:13.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-dtx59,SelfLink:/api/v1/namespaces/e2e-tests-watch-dtx59/configmaps/e2e-watch-test-label-changed,UID:54daee78-4422-11ea-a994-fa163e34d433,ResourceVersion:20079471,Generation:0,CreationTimestamp:2020-01-31 12:08:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:08:13.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-dtx59" for this suite.
Jan 31 12:08:19.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:08:19.657: INFO: namespace: e2e-tests-watch-dtx59, resource: bindings, ignored listing per whitelist
Jan 31 12:08:19.677: INFO: namespace e2e-tests-watch-dtx59 deletion completed in 6.349871698s

• [SLOW TEST:16.782 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:08:19.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 31 12:08:19.862: INFO: Waiting up to 5m0s for pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005" in namespace "e2e-tests-containers-vj9zg" to be "success or failure"
Jan 31 12:08:19.885: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.994562ms
Jan 31 12:08:22.303: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441094795s
Jan 31 12:08:24.338: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475817854s
Jan 31 12:08:26.884: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.021917019s
Jan 31 12:08:28.900: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.037988898s
Jan 31 12:08:30.930: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.068375795s
STEP: Saw pod success
Jan 31 12:08:30.931: INFO: Pod "client-containers-5ecfc294-4422-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:08:30.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5ecfc294-4422-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:08:31.169: INFO: Waiting for pod client-containers-5ecfc294-4422-11ea-aae6-0242ac110005 to disappear
Jan 31 12:08:31.206: INFO: Pod client-containers-5ecfc294-4422-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:08:31.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vj9zg" for this suite.
Jan 31 12:08:39.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:08:39.461: INFO: namespace: e2e-tests-containers-vj9zg, resource: bindings, ignored listing per whitelist
Jan 31 12:08:39.535: INFO: namespace e2e-tests-containers-vj9zg deletion completed in 8.319759425s

• [SLOW TEST:19.858 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:08:39.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 12:08:39.999: INFO: Waiting up to 5m0s for pod "pod-6accd802-4422-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-p6lkd" to be "success or failure"
Jan 31 12:08:40.134: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 133.82915ms
Jan 31 12:08:42.179: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179136619s
Jan 31 12:08:44.195: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195421151s
Jan 31 12:08:46.205: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20481481s
Jan 31 12:08:48.328: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328643919s
Jan 31 12:08:50.348: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.348384469s
STEP: Saw pod success
Jan 31 12:08:50.349: INFO: Pod "pod-6accd802-4422-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:08:50.361: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6accd802-4422-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:08:50.431: INFO: Waiting for pod pod-6accd802-4422-11ea-aae6-0242ac110005 to disappear
Jan 31 12:08:50.445: INFO: Pod pod-6accd802-4422-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:08:50.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p6lkd" for this suite.
Jan 31 12:08:56.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:08:57.051: INFO: namespace: e2e-tests-emptydir-p6lkd, resource: bindings, ignored listing per whitelist
Jan 31 12:08:57.616: INFO: namespace e2e-tests-emptydir-p6lkd deletion completed in 7.116884916s

• [SLOW TEST:18.077 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:08:57.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 31 12:09:08.160: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-758347de-4422-11ea-aae6-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-6x48k", SelfLink:"/api/v1/namespaces/e2e-tests-pods-6x48k/pods/pod-submit-remove-758347de-4422-11ea-aae6-0242ac110005", UID:"7586375b-4422-11ea-a994-fa163e34d433", ResourceVersion:"20079597", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716069337, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"933643417"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-srfdm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00208e6c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-srfdm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f452f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002188ea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f45330)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f45350)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000f45358), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000f4535c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716069338, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716069347, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716069347, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716069337, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001fbd940), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fbd960), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://600f925bd047d5104b13a1d0f12900845a9b842b23504fda0e335f3cf5803465"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:09:22.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6x48k" for this suite.
Jan 31 12:09:30.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:09:30.856: INFO: namespace: e2e-tests-pods-6x48k, resource: bindings, ignored listing per whitelist
Jan 31 12:09:30.941: INFO: namespace e2e-tests-pods-6x48k deletion completed in 8.213750069s

• [SLOW TEST:33.325 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:09:30.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-28kwv/configmap-test-89590d0b-4422-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 12:09:31.380: INFO: Waiting up to 5m0s for pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-28kwv" to be "success or failure"
Jan 31 12:09:31.406: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.930023ms
Jan 31 12:09:33.428: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047588457s
Jan 31 12:09:35.483: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10246506s
Jan 31 12:09:37.617: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23625922s
Jan 31 12:09:39.631: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250810707s
Jan 31 12:09:41.683: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302174707s
STEP: Saw pod success
Jan 31 12:09:41.683: INFO: Pod "pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:09:41.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005 container env-test: 
STEP: delete the pod
Jan 31 12:09:42.300: INFO: Waiting for pod pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005 to disappear
Jan 31 12:09:42.481: INFO: Pod pod-configmaps-896e00dd-4422-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:09:42.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-28kwv" for this suite.
Jan 31 12:09:48.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:09:48.757: INFO: namespace: e2e-tests-configmap-28kwv, resource: bindings, ignored listing per whitelist
Jan 31 12:09:48.774: INFO: namespace e2e-tests-configmap-28kwv deletion completed in 6.262561354s

• [SLOW TEST:17.832 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:09:48.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 31 12:09:49.010: INFO: Waiting up to 5m0s for pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-p2ts9" to be "success or failure"
Jan 31 12:09:49.022: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.115905ms
Jan 31 12:09:51.034: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024161884s
Jan 31 12:09:53.053: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043414993s
Jan 31 12:09:55.775: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764720572s
Jan 31 12:09:57.791: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781199118s
Jan 31 12:09:59.829: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.819599422s
STEP: Saw pod success
Jan 31 12:09:59.830: INFO: Pod "downward-api-93e4db5b-4422-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:09:59.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-93e4db5b-4422-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 12:10:00.626: INFO: Waiting for pod downward-api-93e4db5b-4422-11ea-aae6-0242ac110005 to disappear
Jan 31 12:10:00.646: INFO: Pod downward-api-93e4db5b-4422-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:10:00.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p2ts9" for this suite.
Jan 31 12:10:07.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:10:07.293: INFO: namespace: e2e-tests-downward-api-p2ts9, resource: bindings, ignored listing per whitelist
Jan 31 12:10:07.533: INFO: namespace e2e-tests-downward-api-p2ts9 deletion completed in 6.871510288s

• [SLOW TEST:18.759 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:10:07.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 31 12:10:07.682: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 31 12:10:07.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:08.181: INFO: stderr: ""
Jan 31 12:10:08.181: INFO: stdout: "service/redis-slave created\n"
Jan 31 12:10:08.182: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 31 12:10:08.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:08.735: INFO: stderr: ""
Jan 31 12:10:08.735: INFO: stdout: "service/redis-master created\n"
Jan 31 12:10:08.736: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 31 12:10:08.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:09.211: INFO: stderr: ""
Jan 31 12:10:09.211: INFO: stdout: "service/frontend created\n"
Jan 31 12:10:09.212: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 31 12:10:09.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:09.797: INFO: stderr: ""
Jan 31 12:10:09.797: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 31 12:10:09.799: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 12:10:09.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:10.304: INFO: stderr: ""
Jan 31 12:10:10.304: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 31 12:10:10.306: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 31 12:10:10.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:11.209: INFO: stderr: ""
Jan 31 12:10:11.210: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 31 12:10:11.210: INFO: Waiting for all frontend pods to be Running.
Jan 31 12:10:41.264: INFO: Waiting for frontend to serve content.
Jan 31 12:10:43.159: INFO: Trying to add a new entry to the guestbook.
Jan 31 12:10:43.215: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 31 12:10:43.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:45.458: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:10:45.458: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 12:10:45.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:45.975: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:10:45.976: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 12:10:45.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:46.234: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:10:46.234: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 12:10:46.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:46.418: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:10:46.418: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 12:10:46.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:46.868: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:10:46.869: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 12:10:46.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nz4gr'
Jan 31 12:10:47.314: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:10:47.314: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:10:47.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nz4gr" for this suite.
Jan 31 12:11:33.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:11:33.791: INFO: namespace: e2e-tests-kubectl-nz4gr, resource: bindings, ignored listing per whitelist
Jan 31 12:11:33.961: INFO: namespace e2e-tests-kubectl-nz4gr deletion completed in 46.527469168s

• [SLOW TEST:86.427 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:11:33.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:11:34.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-h8pbh" to be "success or failure"
Jan 31 12:11:34.247: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.228656ms
Jan 31 12:11:36.473: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233224734s
Jan 31 12:11:38.513: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273640448s
Jan 31 12:11:40.594: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354406046s
Jan 31 12:11:42.624: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38424527s
Jan 31 12:11:44.655: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.414709711s
STEP: Saw pod success
Jan 31 12:11:44.655: INFO: Pod "downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:11:44.669: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:11:44.863: INFO: Waiting for pod downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005 to disappear
Jan 31 12:11:44.875: INFO: Pod downwardapi-volume-d2ac6653-4422-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:11:44.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h8pbh" for this suite.
Jan 31 12:11:51.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:11:51.117: INFO: namespace: e2e-tests-projected-h8pbh, resource: bindings, ignored listing per whitelist
Jan 31 12:11:51.267: INFO: namespace e2e-tests-projected-h8pbh deletion completed in 6.378459692s

• [SLOW TEST:17.305 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:11:51.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 31 12:12:01.698: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-dd0cacc7-4422-11ea-aae6-0242ac110005,GenerateName:,Namespace:e2e-tests-events-cdq7v,SelfLink:/api/v1/namespaces/e2e-tests-events-cdq7v/pods/send-events-dd0cacc7-4422-11ea-aae6-0242ac110005,UID:dd0f3b08-4422-11ea-a994-fa163e34d433,ResourceVersion:20080080,Generation:0,CreationTimestamp:2020-01-31 12:11:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 639406137,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pjn8b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pjn8b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pjn8b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f45b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f45b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:11:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:12:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:12:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:11:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-31 12:11:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-31 12:11:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://16444e85c8812fcdf051cc65ce55ffcf183644518822e6a12e3da596f2172373}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 31 12:12:03.721: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 31 12:12:05.738: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:12:05.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-cdq7v" for this suite.
Jan 31 12:12:45.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:12:46.025: INFO: namespace: e2e-tests-events-cdq7v, resource: bindings, ignored listing per whitelist
Jan 31 12:12:46.074: INFO: namespace e2e-tests-events-cdq7v deletion completed in 40.177574235s

• [SLOW TEST:54.807 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
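Editor's note: the pod object dumped by the Events test above reduces to a small manifest. This is a hedged reconstruction from the logged ObjectMeta/PodSpec only; server-generated fields (UID, resourceVersion, the service-account token volume) are omitted.

```yaml
# Reconstructed from the logged pod dump; not the test's source manifest.
apiVersion: v1
kind: Pod
metadata:
  name: send-events-dd0cacc7-4422-11ea-aae6-0242ac110005
  labels:
    name: foo
    time: "639406137"
spec:
  restartPolicy: Always
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then asserts that both a scheduler event (Scheduled) and a kubelet event are recorded against this pod, which is what the two "Saw ... event for our pod." lines confirm.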
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:12:46.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:12:46.260: INFO: Creating ReplicaSet my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005
Jan 31 12:12:46.277: INFO: Pod name my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005: Found 0 pods out of 1
Jan 31 12:12:52.014: INFO: Pod name my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005: Found 1 pods out of 1
Jan 31 12:12:52.014: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005" is running
Jan 31 12:12:56.050: INFO: Pod "my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005-pg2kk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 12:12:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 12:12:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 12:12:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 12:12:46 +0000 UTC Reason: Message:}])
Jan 31 12:12:56.050: INFO: Trying to dial the pod
Jan 31 12:13:01.097: INFO: Controller my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005: Got expected result from replica 1 [my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005-pg2kk]: "my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005-pg2kk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:13:01.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-5gbql" for this suite.
Jan 31 12:13:07.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:13:07.309: INFO: namespace: e2e-tests-replicaset-5gbql, resource: bindings, ignored listing per whitelist
Jan 31 12:13:07.357: INFO: namespace e2e-tests-replicaset-5gbql deletion completed in 6.254199818s

• [SLOW TEST:21.282 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
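Editor's note: a minimal sketch of the ReplicaSet this test creates, inferred from the logged names and replica count. The image is an assumption (the replicas answer HTTP requests with their own hostname, consistent with the serve-hostname test image seen elsewhere in this run); the manifest itself is not printed in the log.

```yaml
# Hedged sketch; image and label key are assumptions, not read from the log.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005
  template:
    metadata:
      labels:
        name: my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005
    spec:
      containers:
      - name: my-hostname-basic-fd9b277b-4422-11ea-aae6-0242ac110005
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed
```

The "Got expected result from replica 1" line shows the check: each replica pod is dialed and must return its own pod name.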
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:13:07.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0a50324d-4423-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 12:13:07.625: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-t5pnh" to be "success or failure"
Jan 31 12:13:07.633: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.821398ms
Jan 31 12:13:10.339: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713495049s
Jan 31 12:13:12.352: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.726257739s
Jan 31 12:13:14.676: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.050868079s
Jan 31 12:13:16.691: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.065405567s
Jan 31 12:13:18.712: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.086424948s
Jan 31 12:13:20.860: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.234980927s
STEP: Saw pod success
Jan 31 12:13:20.861: INFO: Pod "pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:13:20.884: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 12:13:21.148: INFO: Waiting for pod pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005 to disappear
Jan 31 12:13:21.165: INFO: Pod pod-projected-secrets-0a52da1b-4423-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:13:21.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t5pnh" for this suite.
Jan 31 12:13:27.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:13:27.438: INFO: namespace: e2e-tests-projected-t5pnh, resource: bindings, ignored listing per whitelist
Jan 31 12:13:27.449: INFO: namespace e2e-tests-projected-t5pnh deletion completed in 6.270995469s

• [SLOW TEST:20.092 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
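Editor's note: the "mappings and Item Mode set" behavior under test can be sketched as a pod mounting a projected secret volume with a per-item `path` and `mode`. The secret name below is from the log; the key, path, mode value, and container image are illustrative assumptions.

```yaml
# Hedged sketch; key/path/mode and image are illustrative, not from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-0a50324d-4423-11ea-aae6-0242ac110005
          items:
          - key: data-1                 # assumed key
            path: new-path-data-1       # the "mapping" under test
            mode: 0400                  # the per-item mode under test (assumed value)
  containers:
  - name: projected-secret-volume-test
    image: busybox                      # stand-in for the e2e test image
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
```

The test passes when the container reads the mapped file, exits 0, and the pod reaches phase Succeeded, matching the "success or failure" wait loop above.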
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:13:27.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-78sf
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 12:13:27.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-78sf" in namespace "e2e-tests-subpath-2dt74" to be "success or failure"
Jan 31 12:13:27.883: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.722035ms
Jan 31 12:13:29.973: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128654275s
Jan 31 12:13:32.039: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195428052s
Jan 31 12:13:34.140: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295550584s
Jan 31 12:13:36.578: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734307719s
Jan 31 12:13:38.673: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.829400106s
Jan 31 12:13:40.732: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.887691305s
Jan 31 12:13:42.744: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 14.899562169s
Jan 31 12:13:44.758: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 16.913959041s
Jan 31 12:13:46.789: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 18.944533168s
Jan 31 12:13:48.807: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 20.962647325s
Jan 31 12:13:50.830: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 22.985596583s
Jan 31 12:13:52.852: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 25.008007617s
Jan 31 12:13:54.875: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 27.031293492s
Jan 31 12:13:56.895: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 29.050633637s
Jan 31 12:13:58.916: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Running", Reason="", readiness=false. Elapsed: 31.0722335s
Jan 31 12:14:00.982: INFO: Pod "pod-subpath-test-downwardapi-78sf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.137890929s
STEP: Saw pod success
Jan 31 12:14:00.982: INFO: Pod "pod-subpath-test-downwardapi-78sf" satisfied condition "success or failure"
Jan 31 12:14:01.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-78sf container test-container-subpath-downwardapi-78sf: 
STEP: delete the pod
Jan 31 12:14:01.297: INFO: Waiting for pod pod-subpath-test-downwardapi-78sf to disappear
Jan 31 12:14:01.312: INFO: Pod pod-subpath-test-downwardapi-78sf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-78sf
Jan 31 12:14:01.313: INFO: Deleting pod "pod-subpath-test-downwardapi-78sf" in namespace "e2e-tests-subpath-2dt74"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:14:01.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2dt74" for this suite.
Jan 31 12:14:07.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:14:07.479: INFO: namespace: e2e-tests-subpath-2dt74, resource: bindings, ignored listing per whitelist
Jan 31 12:14:07.614: INFO: namespace e2e-tests-subpath-2dt74 deletion completed in 6.276672319s

• [SLOW TEST:40.164 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
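Editor's note: "subpaths with downward pod" exercises a `subPath` mount over a downward API volume. The pod and container names below match the log; the volume items, paths, and image are illustrative assumptions, since the manifest is not printed.

```yaml
# Hedged sketch; volume items, mount paths, and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-78sf
spec:
  restartPolicy: Never
  volumes:
  - name: downward-vol
    downwardAPI:
      items:
      - path: downward/podname
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: test-container-subpath-downwardapi-78sf
    image: busybox                # stand-in for the e2e test image
    command: ["cat", "/test-volume/podname"]
    volumeMounts:
    - name: downward-vol
      mountPath: /test-volume
      subPath: downward           # the subPath behavior under test
```

The long Pending-then-Running phase sequence in the log reflects the atomic-writer check: the container repeatedly reads the file while the kubelet atomically updates the volume contents.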
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:14:07.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 31 12:14:08.008: INFO: namespace e2e-tests-kubectl-jtkhn
Jan 31 12:14:08.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jtkhn'
Jan 31 12:14:08.616: INFO: stderr: ""
Jan 31 12:14:08.616: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 31 12:14:09.635: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:09.635: INFO: Found 0 / 1
Jan 31 12:14:10.658: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:10.659: INFO: Found 0 / 1
Jan 31 12:14:11.635: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:11.635: INFO: Found 0 / 1
Jan 31 12:14:12.630: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:12.630: INFO: Found 0 / 1
Jan 31 12:14:13.777: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:13.778: INFO: Found 0 / 1
Jan 31 12:14:14.635: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:14.635: INFO: Found 0 / 1
Jan 31 12:14:15.635: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:15.635: INFO: Found 0 / 1
Jan 31 12:14:16.669: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:16.670: INFO: Found 0 / 1
Jan 31 12:14:17.634: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:17.634: INFO: Found 1 / 1
Jan 31 12:14:17.634: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 12:14:17.640: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 12:14:17.640: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 31 12:14:17.640: INFO: wait on redis-master startup in e2e-tests-kubectl-jtkhn 
Jan 31 12:14:17.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7ngzw redis-master --namespace=e2e-tests-kubectl-jtkhn'
Jan 31 12:14:17.913: INFO: stderr: ""
Jan 31 12:14:17.913: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Jan 12:14:15.852 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Jan 12:14:15.852 # Server started, Redis version 3.2.12\n1:M 31 Jan 12:14:15.853 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Jan 12:14:15.853 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 31 12:14:17.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-jtkhn'
Jan 31 12:14:18.171: INFO: stderr: ""
Jan 31 12:14:18.171: INFO: stdout: "service/rm2 exposed\n"
Jan 31 12:14:18.246: INFO: Service rm2 in namespace e2e-tests-kubectl-jtkhn found.
STEP: exposing service
Jan 31 12:14:20.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-jtkhn'
Jan 31 12:14:20.606: INFO: stderr: ""
Jan 31 12:14:20.606: INFO: stdout: "service/rm3 exposed\n"
Jan 31 12:14:20.797: INFO: Service rm3 in namespace e2e-tests-kubectl-jtkhn found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:14:22.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jtkhn" for this suite.
Jan 31 12:14:44.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:14:44.972: INFO: namespace: e2e-tests-kubectl-jtkhn, resource: bindings, ignored listing per whitelist
Jan 31 12:14:45.075: INFO: namespace e2e-tests-kubectl-jtkhn deletion completed in 22.235951408s

• [SLOW TEST:37.460 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
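Editor's note: the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` command logged above generates a Service roughly equivalent to the manifest below. The selector is taken from the RC's pod label (`app: redis`), which the log's "Selector matched 1 pods for map[app:redis]" lines confirm; the second command (`rm3`) does the same against the `rm2` Service with port 2345.

```yaml
# Equivalent of the logged `kubectl expose rc` invocation for rm2.
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis          # from the RC's pod selector seen in the log
  ports:
  - port: 1234          # --port
    targetPort: 6379    # --target-port (Redis, per the logged startup banner)
```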
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:14:45.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 12:15:05.488: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:05.510: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:07.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:07.619: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:09.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:09.526: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:11.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:11.525: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:13.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:13.523: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:15.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:15.554: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:17.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:17.523: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:19.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:19.521: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:21.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:21.551: INFO: Pod pod-with-poststart-http-hook still exists
Jan 31 12:15:23.510: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 31 12:15:23.519: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:15:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-rh8xb" for this suite.
Jan 31 12:15:47.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:15:47.613: INFO: namespace: e2e-tests-container-lifecycle-hook-rh8xb, resource: bindings, ignored listing per whitelist
Jan 31 12:15:47.722: INFO: namespace e2e-tests-container-lifecycle-hook-rh8xb deletion completed in 24.197344246s

• [SLOW TEST:62.647 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
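Editor's note: the postStart HTTP hook under test can be sketched as below. The pod name matches the log; the image, handler path, and port are illustrative assumptions (the test run's handler pod details are not printed).

```yaml
# Hedged sketch; image, hook path, and port are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # illustrative handler endpoint
          port: 8080                  # illustrative port on the handler pod
```

The "check poststart hook" step verifies the handler container (created in BeforeEach) received the GET; the long "still exists" loop afterwards is just the graceful-deletion wait, bounded by `terminationGracePeriodSeconds`.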
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:15:47.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-6gr9
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 12:15:48.118: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6gr9" in namespace "e2e-tests-subpath-fxl5w" to be "success or failure"
Jan 31 12:15:48.166: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 47.583994ms
Jan 31 12:15:50.259: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140999691s
Jan 31 12:15:52.282: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16387288s
Jan 31 12:15:54.338: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219703218s
Jan 31 12:15:56.408: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289627775s
Jan 31 12:15:58.418: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.299806713s
Jan 31 12:16:00.634: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.515094225s
Jan 31 12:16:02.772: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.653204019s
Jan 31 12:16:04.788: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.669687681s
Jan 31 12:16:06.811: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 18.692568704s
Jan 31 12:16:08.827: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 20.708436792s
Jan 31 12:16:10.843: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 22.72440865s
Jan 31 12:16:12.867: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 24.74905123s
Jan 31 12:16:14.901: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 26.782724301s
Jan 31 12:16:16.934: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 28.81545884s
Jan 31 12:16:18.952: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 30.833594496s
Jan 31 12:16:20.972: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 32.853451492s
Jan 31 12:16:23.017: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 34.898191817s
Jan 31 12:16:25.482: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 37.363298638s
Jan 31 12:16:27.498: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Running", Reason="", readiness=false. Elapsed: 39.379413794s
Jan 31 12:16:29.529: INFO: Pod "pod-subpath-test-projected-6gr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.41095553s
STEP: Saw pod success
Jan 31 12:16:29.530: INFO: Pod "pod-subpath-test-projected-6gr9" satisfied condition "success or failure"
Jan 31 12:16:29.538: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-6gr9 container test-container-subpath-projected-6gr9: 
STEP: delete the pod
Jan 31 12:16:29.986: INFO: Waiting for pod pod-subpath-test-projected-6gr9 to disappear
Jan 31 12:16:30.284: INFO: Pod pod-subpath-test-projected-6gr9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-6gr9
Jan 31 12:16:30.285: INFO: Deleting pod "pod-subpath-test-projected-6gr9" in namespace "e2e-tests-subpath-fxl5w"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:16:30.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fxl5w" for this suite.
Jan 31 12:16:40.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:16:40.735: INFO: namespace: e2e-tests-subpath-fxl5w, resource: bindings, ignored listing per whitelist
Jan 31 12:16:40.769: INFO: namespace e2e-tests-subpath-fxl5w deletion completed in 10.463119003s

• [SLOW TEST:53.046 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:16:40.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 31 12:16:40.962: INFO: Waiting up to 5m0s for pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005" in namespace "e2e-tests-containers-74dms" to be "success or failure"
Jan 31 12:16:41.010: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.502614ms
Jan 31 12:16:43.024: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061212787s
Jan 31 12:16:45.053: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090148622s
Jan 31 12:16:47.065: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10267098s
Jan 31 12:16:52.658: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.695417891s
Jan 31 12:16:54.674: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.71180097s
Jan 31 12:16:56.704: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.741272027s
STEP: Saw pod success
Jan 31 12:16:56.704: INFO: Pod "client-containers-897e0e02-4423-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:16:56.716: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-897e0e02-4423-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:16:56.913: INFO: Waiting for pod client-containers-897e0e02-4423-11ea-aae6-0242ac110005 to disappear
Jan 31 12:16:56.926: INFO: Pod client-containers-897e0e02-4423-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:16:56.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-74dms" for this suite.
Jan 31 12:17:05.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:17:05.061: INFO: namespace: e2e-tests-containers-74dms, resource: bindings, ignored listing per whitelist
Jan 31 12:17:05.157: INFO: namespace e2e-tests-containers-74dms deletion completed in 8.220918668s

• [SLOW TEST:24.387 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:17:05.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-981651ee-4423-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 12:17:05.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-mkw4p" to be "success or failure"
Jan 31 12:17:05.504: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.578262ms
Jan 31 12:17:07.520: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054400115s
Jan 31 12:17:09.539: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07365242s
Jan 31 12:17:12.644: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.177875787s
Jan 31 12:17:14.653: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.187562339s
Jan 31 12:17:16.677: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.211574202s
STEP: Saw pod success
Jan 31 12:17:16.678: INFO: Pod "pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:17:16.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 12:17:16.928: INFO: Waiting for pod pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005 to disappear
Jan 31 12:17:16.988: INFO: Pod pod-projected-configmaps-98182258-4423-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:17:16.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mkw4p" for this suite.
Jan 31 12:17:23.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:17:23.181: INFO: namespace: e2e-tests-projected-mkw4p, resource: bindings, ignored listing per whitelist
Jan 31 12:17:23.374: INFO: namespace e2e-tests-projected-mkw4p deletion completed in 6.365660334s

• [SLOW TEST:18.217 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:17:23.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 31 12:17:23.765: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbrgm,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbrgm/configmaps/e2e-watch-test-watch-closed,UID:a2fca7ed-4423-11ea-a994-fa163e34d433,ResourceVersion:20080742,Generation:0,CreationTimestamp:2020-01-31 12:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 12:17:23.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbrgm,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbrgm/configmaps/e2e-watch-test-watch-closed,UID:a2fca7ed-4423-11ea-a994-fa163e34d433,ResourceVersion:20080744,Generation:0,CreationTimestamp:2020-01-31 12:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 31 12:17:23.934: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbrgm,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbrgm/configmaps/e2e-watch-test-watch-closed,UID:a2fca7ed-4423-11ea-a994-fa163e34d433,ResourceVersion:20080745,Generation:0,CreationTimestamp:2020-01-31 12:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 12:17:23.935: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbrgm,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbrgm/configmaps/e2e-watch-test-watch-closed,UID:a2fca7ed-4423-11ea-a994-fa163e34d433,ResourceVersion:20080746,Generation:0,CreationTimestamp:2020-01-31 12:17:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:17:23.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hbrgm" for this suite.
Jan 31 12:17:30.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:17:30.183: INFO: namespace: e2e-tests-watch-hbrgm, resource: bindings, ignored listing per whitelist
Jan 31 12:17:30.267: INFO: namespace e2e-tests-watch-hbrgm deletion completed in 6.316855609s

• [SLOW TEST:6.893 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:17:30.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 31 12:17:30.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 31 12:17:30.619: INFO: stderr: ""
Jan 31 12:17:30.620: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:17:30.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cjm7m" for this suite.
Jan 31 12:17:36.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:17:36.896: INFO: namespace: e2e-tests-kubectl-cjm7m, resource: bindings, ignored listing per whitelist
Jan 31 12:17:36.911: INFO: namespace e2e-tests-kubectl-cjm7m deletion completed in 6.251237299s

• [SLOW TEST:6.644 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:17:36.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 12:17:37.121: INFO: Waiting up to 5m0s for pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-cg9bv" to be "success or failure"
Jan 31 12:17:37.125: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.549668ms
Jan 31 12:17:39.168: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046370311s
Jan 31 12:17:41.184: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062939626s
Jan 31 12:17:43.198: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077198617s
Jan 31 12:17:45.215: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093341759s
Jan 31 12:17:47.242: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120857241s
Jan 31 12:17:49.504: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.38273148s
STEP: Saw pod success
Jan 31 12:17:49.504: INFO: Pod "pod-aaf6eb7d-4423-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:17:49.519: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-aaf6eb7d-4423-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:17:50.033: INFO: Waiting for pod pod-aaf6eb7d-4423-11ea-aae6-0242ac110005 to disappear
Jan 31 12:17:50.053: INFO: Pod pod-aaf6eb7d-4423-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:17:50.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cg9bv" for this suite.
Jan 31 12:17:56.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:17:56.227: INFO: namespace: e2e-tests-emptydir-cg9bv, resource: bindings, ignored listing per whitelist
Jan 31 12:17:56.279: INFO: namespace e2e-tests-emptydir-cg9bv deletion completed in 6.210494554s

• [SLOW TEST:19.367 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:17:56.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jjzc8
Jan 31 12:18:06.616: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jjzc8
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 12:18:06.619: INFO: Initial restart count of pod liveness-http is 0
Jan 31 12:18:23.115: INFO: Restart count of pod e2e-tests-container-probe-jjzc8/liveness-http is now 1 (16.495371575s elapsed)
Jan 31 12:18:43.634: INFO: Restart count of pod e2e-tests-container-probe-jjzc8/liveness-http is now 2 (37.014232869s elapsed)
Jan 31 12:19:04.059: INFO: Restart count of pod e2e-tests-container-probe-jjzc8/liveness-http is now 3 (57.439314152s elapsed)
Jan 31 12:19:24.220: INFO: Restart count of pod e2e-tests-container-probe-jjzc8/liveness-http is now 4 (1m17.600451909s elapsed)
Jan 31 12:20:23.720: INFO: Restart count of pod e2e-tests-container-probe-jjzc8/liveness-http is now 5 (2m17.100603778s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:20:23.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jjzc8" for this suite.
Jan 31 12:20:29.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:20:30.058: INFO: namespace: e2e-tests-container-probe-jjzc8, resource: bindings, ignored listing per whitelist
Jan 31 12:20:30.132: INFO: namespace e2e-tests-container-probe-jjzc8 deletion completed in 6.250024148s

• [SLOW TEST:153.852 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:20:30.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 31 12:20:30.539: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 31 12:20:35.560: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:20:36.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-hfkxt" for this suite.
Jan 31 12:20:42.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:20:43.139: INFO: namespace: e2e-tests-replication-controller-hfkxt, resource: bindings, ignored listing per whitelist
Jan 31 12:20:43.174: INFO: namespace e2e-tests-replication-controller-hfkxt deletion completed in 6.443427003s

• [SLOW TEST:13.042 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:20:43.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:20:47.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fj54j" for this suite.
Jan 31 12:21:09.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:21:09.334: INFO: namespace: e2e-tests-kubelet-test-fj54j, resource: bindings, ignored listing per whitelist
Jan 31 12:21:09.354: INFO: namespace e2e-tests-kubelet-test-fj54j deletion completed in 22.254127661s

• [SLOW TEST:26.180 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:21:09.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:21:09.585: INFO: Creating deployment "nginx-deployment"
Jan 31 12:21:09.607: INFO: Waiting for observed generation 1
Jan 31 12:21:12.002: INFO: Waiting for all required pods to come up
Jan 31 12:21:13.333: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 31 12:21:57.841: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 31 12:21:57.858: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 31 12:21:57.877: INFO: Updating deployment nginx-deployment
Jan 31 12:21:57.877: INFO: Waiting for observed generation 2
Jan 31 12:22:00.903: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 31 12:22:02.884: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 31 12:22:02.891: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 31 12:22:03.532: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 31 12:22:03.533: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 31 12:22:03.538: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 31 12:22:03.558: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 31 12:22:03.558: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 31 12:22:03.582: INFO: Updating deployment nginx-deployment
Jan 31 12:22:03.583: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 31 12:22:05.621: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 31 12:22:07.813: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 31 12:22:08.745: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-ccljz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ccljz/deployments/nginx-deployment,UID:299c89fe-4424-11ea-a994-fa163e34d433,ResourceVersion:20081393,Generation:3,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-31 12:21:58 +0000 UTC 2020-01-31 12:21:09 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-31 12:22:05 +0000 UTC 2020-01-31 12:22:05 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 31 12:22:09.125: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-ccljz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ccljz/replicasets/nginx-deployment-5c98f8fb5,UID:46664e72-4424-11ea-a994-fa163e34d433,ResourceVersion:20081388,Generation:3,CreationTimestamp:2020-01-31 12:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 299c89fe-4424-11ea-a994-fa163e34d433 0xc0017ff487 0xc0017ff488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 12:22:09.125: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 31 12:22:09.125: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-ccljz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ccljz/replicasets/nginx-deployment-85ddf47c5d,UID:29a607f4-4424-11ea-a994-fa163e34d433,ResourceVersion:20081384,Generation:3,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 299c89fe-4424-11ea-a994-fa163e34d433 0xc0017ff5a7 0xc0017ff5a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 31 12:22:10.210: INFO: Pod "nginx-deployment-5c98f8fb5-5rjm4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5rjm4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-5rjm4,UID:4c5f742d-4424-11ea-a994-fa163e34d433,ResourceVersion:20081403,Generation:0,CreationTimestamp:2020-01-31 12:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0019157b7 0xc0019157b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001915820} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001915840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.210: INFO: Pod "nginx-deployment-5c98f8fb5-5vnbb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5vnbb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-5vnbb,UID:4ce192fd-4424-11ea-a994-fa163e34d433,ResourceVersion:20081414,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0019158b7 0xc0019158b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001915940} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001915960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.211: INFO: Pod "nginx-deployment-5c98f8fb5-6cd4x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6cd4x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-6cd4x,UID:4ce02044-4424-11ea-a994-fa163e34d433,ResourceVersion:20081412,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0019159c0 0xc0019159c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001915ac0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001915ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.211: INFO: Pod "nginx-deployment-5c98f8fb5-944gm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-944gm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-944gm,UID:4ce1066e-4424-11ea-a994-fa163e34d433,ResourceVersion:20081427,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc001915ec0 0xc001915ec1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001915f30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001915f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.212: INFO: Pod "nginx-deployment-5c98f8fb5-k2p88" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k2p88,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-k2p88,UID:46774f18-4424-11ea-a994-fa163e34d433,ResourceVersion:20081374,Generation:0,CreationTimestamp:2020-01-31 12:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc001915fc7 0xc001915fc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016ce3e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016ce400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 12:21:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.212: INFO: Pod "nginx-deployment-5c98f8fb5-kjpsq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kjpsq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-kjpsq,UID:4ce14340-4424-11ea-a994-fa163e34d433,ResourceVersion:20081423,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016ce4c7 0xc0016ce4c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016ce760} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016ce780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.213: INFO: Pod "nginx-deployment-5c98f8fb5-mr6lh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mr6lh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-mr6lh,UID:46d08408-4424-11ea-a994-fa163e34d433,ResourceVersion:20081382,Generation:0,CreationTimestamp:2020-01-31 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016ce7f7 0xc0016ce7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016ce860} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016ce970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 12:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.213: INFO: Pod "nginx-deployment-5c98f8fb5-qplhs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qplhs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-qplhs,UID:4d208cd9-4424-11ea-a994-fa163e34d433,ResourceVersion:20081429,Generation:0,CreationTimestamp:2020-01-31 12:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016cea37 0xc0016cea38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016cec10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cec30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.214: INFO: Pod "nginx-deployment-5c98f8fb5-qvnvq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qvnvq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-qvnvq,UID:4c51a772-4424-11ea-a994-fa163e34d433,ResourceVersion:20081421,Generation:0,CreationTimestamp:2020-01-31 12:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016cec90 0xc0016cec91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016cee40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cee60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 12:22:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.214: INFO: Pod "nginx-deployment-5c98f8fb5-x88hc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x88hc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-x88hc,UID:46736781-4424-11ea-a994-fa163e34d433,ResourceVersion:20081348,Generation:0,CreationTimestamp:2020-01-31 12:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016cf157 0xc0016cf158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016cf1c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cf1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 12:21:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.215: INFO: Pod "nginx-deployment-5c98f8fb5-xgs9p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xgs9p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-xgs9p,UID:46ca3951-4424-11ea-a994-fa163e34d433,ResourceVersion:20081377,Generation:0,CreationTimestamp:2020-01-31 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016cf3f7 0xc0016cf3f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016cf460} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cf480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 12:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.215: INFO: Pod "nginx-deployment-5c98f8fb5-xjtdw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xjtdw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-xjtdw,UID:4c5ecbb0-4424-11ea-a994-fa163e34d433,ResourceVersion:20081404,Generation:0,CreationTimestamp:2020-01-31 12:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016cf697 0xc0016cf698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016cf770} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cf790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.216: INFO: Pod "nginx-deployment-5c98f8fb5-z6gvv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z6gvv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-5c98f8fb5-z6gvv,UID:46771add-4424-11ea-a994-fa163e34d433,ResourceVersion:20081371,Generation:0,CreationTimestamp:2020-01-31 12:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 46664e72-4424-11ea-a994-fa163e34d433 0xc0016cf807 0xc0016cf808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016cf8f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cf910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-31 12:21:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.216: INFO: Pod "nginx-deployment-85ddf47c5d-575m8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-575m8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-575m8,UID:4c5f44bc-4424-11ea-a994-fa163e34d433,ResourceVersion:20081405,Generation:0,CreationTimestamp:2020-01-31 12:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc0016cfb17 0xc0016cfb18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0016cfb90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016cfbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.216: INFO: Pod "nginx-deployment-85ddf47c5d-5f7rf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5f7rf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-5f7rf,UID:29c377a3-4424-11ea-a994-fa163e34d433,ResourceVersion:20081322,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc0016cfe67 0xc0016cfe68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f44010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f44030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-31 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4bb928d5a0db05c8806651e67670fbc26f088f068fba410bb12ca3b5374c5cc6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.217: INFO: Pod "nginx-deployment-85ddf47c5d-5l7b8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5l7b8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-5l7b8,UID:29c36ee8-4424-11ea-a994-fa163e34d433,ResourceVersion:20081310,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f44337 0xc000f44338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f443a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f443c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-31 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cd293d9e021ac13894e70b708d7daf6ab268e38338726806f78383d407c660b1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.217: INFO: Pod "nginx-deployment-85ddf47c5d-8bjtq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8bjtq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-8bjtq,UID:4c51eccf-4424-11ea-a994-fa163e34d433,ResourceVersion:20081398,Generation:0,CreationTimestamp:2020-01-31 12:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f44777 0xc000f44778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f447e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f44950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.218: INFO: Pod "nginx-deployment-85ddf47c5d-9fmx4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9fmx4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-9fmx4,UID:4cdddad7-4424-11ea-a994-fa163e34d433,ResourceVersion:20081415,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f449c7 0xc000f449c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc000f44a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f44a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.218: INFO: Pod "nginx-deployment-85ddf47c5d-b4zst" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b4zst,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-b4zst,UID:29cc3e66-4424-11ea-a994-fa163e34d433,ResourceVersion:20081318,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f44ba0 0xc000f44ba1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f44c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f44c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-31 12:21:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cec3ee9602d90f3f4f19e7a633654cf09ef66ea826f27c4bd5a850802fd3a1bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.219: INFO: Pod "nginx-deployment-85ddf47c5d-bsd4p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bsd4p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-bsd4p,UID:4d206a62-4424-11ea-a994-fa163e34d433,ResourceVersion:20081430,Generation:0,CreationTimestamp:2020-01-31 12:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f44da7 0xc000f44da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc000f44e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f44e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.219: INFO: Pod "nginx-deployment-85ddf47c5d-hjj5q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hjj5q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-hjj5q,UID:4c5f84a4-4424-11ea-a994-fa163e34d433,ResourceVersion:20081407,Generation:0,CreationTimestamp:2020-01-31 12:22:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f44f60 0xc000f44f61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f44fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f45010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.219: INFO: Pod "nginx-deployment-85ddf47c5d-hq8tz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hq8tz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-hq8tz,UID:4cde75ae-4424-11ea-a994-fa163e34d433,ResourceVersion:20081422,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f45087 0xc000f45088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f45100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f45120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.219: INFO: Pod "nginx-deployment-85ddf47c5d-jhsxr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jhsxr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-jhsxr,UID:29cbb906-4424-11ea-a994-fa163e34d433,ResourceVersion:20081314,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f45197 0xc000f45198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f45210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f45230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-31 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5393f2db9c4873506e7ba49fd2892fac82c01f0905a1315ea07796fe8b645296}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.220: INFO: Pod "nginx-deployment-85ddf47c5d-l6ckw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l6ckw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-l6ckw,UID:29b26530-4424-11ea-a994-fa163e34d433,ResourceVersion:20081297,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f45327 0xc000f45328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f45390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f453b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-31 12:21:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9895f3ee836c092a8cb0685f7c081564d22cba7664f5d91448eae9b24124184a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.220: INFO: Pod "nginx-deployment-85ddf47c5d-lb8nx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lb8nx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-lb8nx,UID:4d1fd547-4424-11ea-a994-fa163e34d433,ResourceVersion:20081424,Generation:0,CreationTimestamp:2020-01-31 12:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f45487 0xc000f45488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc000f454f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f45510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.221: INFO: Pod "nginx-deployment-85ddf47c5d-nhtfs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nhtfs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-nhtfs,UID:4d1fde82-4424-11ea-a994-fa163e34d433,ResourceVersion:20081425,Generation:0,CreationTimestamp:2020-01-31 12:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f45570 0xc000f45571}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc000f457f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f45810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.221: INFO: Pod "nginx-deployment-85ddf47c5d-nkmgj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nkmgj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-nkmgj,UID:29ad3e08-4424-11ea-a994-fa163e34d433,ResourceVersion:20081300,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc000f45870 0xc000f45871}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f458d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f458f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-31 12:21:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d003ce4c4ca5cc5db1837a1fbce54dd5e479aa283b8c97b1f13d388414f5b62c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.221: INFO: Pod "nginx-deployment-85ddf47c5d-nq28h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nq28h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-nq28h,UID:4cde4c48-4424-11ea-a994-fa163e34d433,ResourceVersion:20081413,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc001794037 0xc001794038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc0017940a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017940c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.222: INFO: Pod "nginx-deployment-85ddf47c5d-pf79r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pf79r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-pf79r,UID:4d206ecc-4424-11ea-a994-fa163e34d433,ResourceVersion:20081426,Generation:0,CreationTimestamp:2020-01-31 12:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc0017941a0 0xc0017941a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc001794200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001794220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.222: INFO: Pod "nginx-deployment-85ddf47c5d-qjsqt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qjsqt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-qjsqt,UID:4cddca1f-4424-11ea-a994-fa163e34d433,ResourceVersion:20081419,Generation:0,CreationTimestamp:2020-01-31 12:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc001794380 0xc001794381}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001794480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017944a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:22:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.222: INFO: Pod "nginx-deployment-85ddf47c5d-wwm57" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wwm57,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-wwm57,UID:29c38934-4424-11ea-a994-fa163e34d433,ResourceVersion:20081281,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc001794517 0xc001794518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001794580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017945a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-31 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9bbc9f914fa9f746237197df1a1b4b7dc463cbbdb6c3b325316220451c11c42f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.223: INFO: Pod "nginx-deployment-85ddf47c5d-x7ww8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x7ww8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-x7ww8,UID:4d20105d-4424-11ea-a994-fa163e34d433,ResourceVersion:20081428,Generation:0,CreationTimestamp:2020-01-31 12:22:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc001794797 0xc001794798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc001794800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001794890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 12:22:10.223: INFO: Pod "nginx-deployment-85ddf47c5d-x9p6v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x9p6v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-ccljz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ccljz/pods/nginx-deployment-85ddf47c5d-x9p6v,UID:29b28ab9-4424-11ea-a994-fa163e34d433,ResourceVersion:20081293,Generation:0,CreationTimestamp:2020-01-31 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 29a607f4-4424-11ea-a994-fa163e34d433 0xc0017948f0 0xc0017948f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpqmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpqmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpqmn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001794960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001794980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-31 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e7275114004e10ba1ea95090a0f7a78d29465619e28b4ad68247d78fc21210c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:22:10.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ccljz" for this suite.
Jan 31 12:23:34.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:23:34.816: INFO: namespace: e2e-tests-deployment-ccljz, resource: bindings, ignored listing per whitelist
Jan 31 12:23:34.823: INFO: namespace e2e-tests-deployment-ccljz deletion completed in 1m23.707241548s

• [SLOW TEST:145.468 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:23:34.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:23:37.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 31 12:23:37.766: INFO: stderr: ""
Jan 31 12:23:37.766: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:23:37.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wvp98" for this suite.
Jan 31 12:23:44.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:23:44.308: INFO: namespace: e2e-tests-kubectl-wvp98, resource: bindings, ignored listing per whitelist
Jan 31 12:23:44.380: INFO: namespace e2e-tests-kubectl-wvp98 deletion completed in 6.536992414s

• [SLOW TEST:9.556 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
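The `kubectl version` output captured above shows client v1.13.12 against server v1.13.8. A minimal sketch of the version-skew reasoning behind running this client against this server (kubectl supports one minor version of skew in either direction; the helper names here are illustrative, not part of the e2e framework):

```python
# Sketch of a kubectl/apiserver minor-version skew check, using the
# versions from the log above (client v1.13.12, server v1.13.8).
# Function names are hypothetical; kubectl's skew policy allows +/-1 minor.

def minor(version: str) -> int:
    """Extract the minor version from a tag like 'v1.13.12'."""
    return int(version.lstrip("v").split(".")[1])

def skew_ok(client: str, server: str, max_skew: int = 1) -> bool:
    """True when client and server minor versions differ by at most max_skew."""
    return abs(minor(client) - minor(server)) <= max_skew

print(skew_ok("v1.13.12", "v1.13.8"))  # same 1.13 minor, so True
```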
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:23:44.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-7s4v7
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-7s4v7
STEP: Deleting pre-stop pod
Jan 31 12:24:11.830: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:24:11.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-7s4v7" for this suite.
Jan 31 12:24:46.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:24:46.134: INFO: namespace: e2e-tests-prestop-7s4v7, resource: bindings, ignored listing per whitelist
Jan 31 12:24:46.235: INFO: namespace e2e-tests-prestop-7s4v7 deletion completed in 34.323433224s

• [SLOW TEST:61.855 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
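The PreStop test above deletes a tester pod and asserts the server saw its hook fire (the `"prestop": 1` counter in the JSON dump). A minimal sketch of the mechanism it exercises; the image, command, and endpoint are illustrative assumptions, not the e2e framework's actual pod spec:

```yaml
# Hypothetical pod illustrating a preStop lifecycle hook: the hook runs
# before the container receives SIGTERM on deletion, which is how the
# test's server pod records "prestop": 1.
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: busybox            # illustrative image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Endpoint is an assumption standing in for the test's server pod.
          command: ["/bin/sh", "-c", "wget -qO- http://server:8080/prestop"]
```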
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:24:46.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 12:24:46.458: INFO: Number of nodes with available pods: 0
Jan 31 12:24:46.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:47.493: INFO: Number of nodes with available pods: 0
Jan 31 12:24:47.493: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:48.507: INFO: Number of nodes with available pods: 0
Jan 31 12:24:48.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:49.494: INFO: Number of nodes with available pods: 0
Jan 31 12:24:49.494: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:50.507: INFO: Number of nodes with available pods: 0
Jan 31 12:24:50.508: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:53.943: INFO: Number of nodes with available pods: 0
Jan 31 12:24:53.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:54.904: INFO: Number of nodes with available pods: 0
Jan 31 12:24:54.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:55.480: INFO: Number of nodes with available pods: 0
Jan 31 12:24:55.480: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:56.564: INFO: Number of nodes with available pods: 1
Jan 31 12:24:56.564: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 31 12:24:56.641: INFO: Number of nodes with available pods: 0
Jan 31 12:24:56.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:57.666: INFO: Number of nodes with available pods: 0
Jan 31 12:24:57.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:58.685: INFO: Number of nodes with available pods: 0
Jan 31 12:24:58.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:24:59.666: INFO: Number of nodes with available pods: 0
Jan 31 12:24:59.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:00.750: INFO: Number of nodes with available pods: 0
Jan 31 12:25:00.750: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:01.660: INFO: Number of nodes with available pods: 0
Jan 31 12:25:01.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:02.681: INFO: Number of nodes with available pods: 0
Jan 31 12:25:02.681: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:03.666: INFO: Number of nodes with available pods: 0
Jan 31 12:25:03.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:04.665: INFO: Number of nodes with available pods: 0
Jan 31 12:25:04.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:05.656: INFO: Number of nodes with available pods: 0
Jan 31 12:25:05.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:07.039: INFO: Number of nodes with available pods: 0
Jan 31 12:25:07.039: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:07.688: INFO: Number of nodes with available pods: 0
Jan 31 12:25:07.688: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:08.661: INFO: Number of nodes with available pods: 0
Jan 31 12:25:08.661: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:10.541: INFO: Number of nodes with available pods: 0
Jan 31 12:25:10.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:10.798: INFO: Number of nodes with available pods: 0
Jan 31 12:25:10.798: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:11.666: INFO: Number of nodes with available pods: 0
Jan 31 12:25:11.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:12.704: INFO: Number of nodes with available pods: 0
Jan 31 12:25:12.704: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 31 12:25:13.673: INFO: Number of nodes with available pods: 1
Jan 31 12:25:13.673: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lgm2g, will wait for the garbage collector to delete the pods
Jan 31 12:25:13.788: INFO: Deleting DaemonSet.extensions daemon-set took: 47.844722ms
Jan 31 12:25:13.989: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.828862ms
Jan 31 12:25:20.962: INFO: Number of nodes with available pods: 0
Jan 31 12:25:20.962: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 12:25:20.969: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lgm2g/daemonsets","resourceVersion":"20081951"},"items":null}

Jan 31 12:25:20.972: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lgm2g/pods","resourceVersion":"20081951"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:25:20.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lgm2g" for this suite.
Jan 31 12:25:27.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:25:27.177: INFO: namespace: e2e-tests-daemonsets-lgm2g, resource: bindings, ignored listing per whitelist
Jan 31 12:25:27.189: INFO: namespace e2e-tests-daemonsets-lgm2g deletion completed in 6.200622111s

• [SLOW TEST:40.954 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
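The DaemonSet test above creates a simple DaemonSet, waits for one daemon pod per schedulable node (this cluster has a single node, hence "Number of running nodes: 1, number of available pods: 1"), kills the pod, and checks it is revived. A minimal sketch of such a DaemonSet; labels and image are illustrative:

```yaml
# Minimal DaemonSet sketch. The controller schedules exactly one pod per
# eligible node and recreates a deleted pod, which is the revival behavior
# the log's second polling loop verifies.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine   # illustrative image
```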
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:25:27.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 12:25:27.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-mpwwj'
Jan 31 12:25:29.163: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 12:25:29.163: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 31 12:25:33.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-mpwwj'
Jan 31 12:25:33.365: INFO: stderr: ""
Jan 31 12:25:33.365: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:25:33.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mpwwj" for this suite.
Jan 31 12:25:39.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:25:39.563: INFO: namespace: e2e-tests-kubectl-mpwwj, resource: bindings, ignored listing per whitelist
Jan 31 12:25:39.607: INFO: namespace e2e-tests-kubectl-mpwwj deletion completed in 6.229297404s

• [SLOW TEST:12.418 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
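The stderr captured above warns that `kubectl run --generator=deployment/v1beta1` is deprecated in favor of `kubectl create`. An approximate manifest for what that invocation produces (sketched here with `apps/v1`; the log shows the test actually created `deployment.extensions`, and the label key is an assumption based on `kubectl run` conventions):

```yaml
# Rough Deployment equivalent of the deprecated
# `kubectl run e2e-test-nginx-deployment --image=... --generator=deployment/v1beta1`
# command from the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```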
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:25:39.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:25:39.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-jk6n5" to be "success or failure"
Jan 31 12:25:39.818: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.276064ms
Jan 31 12:25:41.898: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094985283s
Jan 31 12:25:43.943: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139913866s
Jan 31 12:25:45.955: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151458622s
Jan 31 12:25:48.080: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.276922316s
Jan 31 12:25:50.101: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.297414447s
STEP: Saw pod success
Jan 31 12:25:50.101: INFO: Pod "downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:25:50.110: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:25:51.005: INFO: Waiting for pod downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005 to disappear
Jan 31 12:25:51.020: INFO: Pod downwardapi-volume-caaae9d6-4424-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:25:51.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jk6n5" for this suite.
Jan 31 12:25:57.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:25:57.289: INFO: namespace: e2e-tests-projected-jk6n5, resource: bindings, ignored listing per whitelist
Jan 31 12:25:57.306: INFO: namespace e2e-tests-projected-jk6n5 deletion completed in 6.27278767s

• [SLOW TEST:17.699 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
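The projected downwardAPI test above verifies that when a container sets no CPU limit, the file exposed via `resourceFieldRef: limits.cpu` falls back to the node's allocatable CPU. A sketch of the volume shape involved; pod name, image, and paths are illustrative, not the e2e suite's own test image:

```yaml
# Sketch of a projected downwardAPI volume exposing the container's CPU
# limit as a file. With no limit set on the container, the value defaults
# to node allocatable CPU, which is what the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```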
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:25:57.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:25:57.633: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-pxmm9" to be "success or failure"
Jan 31 12:25:57.644: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.608118ms
Jan 31 12:26:00.237: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603599262s
Jan 31 12:26:02.261: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.627869363s
Jan 31 12:26:04.342: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708131907s
Jan 31 12:26:06.354: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720062724s
Jan 31 12:26:08.385: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.75092304s
STEP: Saw pod success
Jan 31 12:26:08.385: INFO: Pod "downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:26:08.398: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:26:09.180: INFO: Waiting for pod downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005 to disappear
Jan 31 12:26:09.425: INFO: Pod downwardapi-volume-d54890ae-4424-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:26:09.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pxmm9" for this suite.
Jan 31 12:26:15.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:26:15.843: INFO: namespace: e2e-tests-projected-pxmm9, resource: bindings, ignored listing per whitelist
Jan 31 12:26:15.949: INFO: namespace e2e-tests-projected-pxmm9 deletion completed in 6.513135622s

• [SLOW TEST:18.642 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:26:15.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 12:26:16.111: INFO: Waiting up to 5m0s for pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-fjfrv" to be "success or failure"
Jan 31 12:26:16.231: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 119.686232ms
Jan 31 12:26:18.914: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.80331459s
Jan 31 12:26:20.933: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.821523641s
Jan 31 12:26:23.364: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.25332628s
Jan 31 12:26:25.384: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.273431129s
Jan 31 12:26:27.651: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.53960216s
Jan 31 12:26:29.670: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.558769448s
STEP: Saw pod success
Jan 31 12:26:29.670: INFO: Pod "pod-e04ee0a1-4424-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:26:29.679: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e04ee0a1-4424-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:26:29.890: INFO: Waiting for pod pod-e04ee0a1-4424-11ea-aae6-0242ac110005 to disappear
Jan 31 12:26:29.932: INFO: Pod pod-e04ee0a1-4424-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:26:29.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fjfrv" for this suite.
Jan 31 12:26:35.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:26:36.108: INFO: namespace: e2e-tests-emptydir-fjfrv, resource: bindings, ignored listing per whitelist
Jan 31 12:26:36.183: INFO: namespace e2e-tests-emptydir-fjfrv deletion completed in 6.229150264s

• [SLOW TEST:20.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
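The emptyDir test above covers the (non-root, 0644, default-medium) combination: a pod running as a non-root user writes a file with mode 0644 into an emptyDir backed by node disk. A minimal sketch under those assumptions; the image and file paths are illustrative, not the e2e mounttest image:

```yaml
# Sketch of the (non-root,0644,default) emptyDir case: non-root security
# context, a 0644-mode file, and the default medium (node disk, not tmpfs).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: test-container
    image: busybox              # illustrative image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium
```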
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:26:36.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:26:46.665: INFO: Waiting up to 5m0s for pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005" in namespace "e2e-tests-pods-7wwg9" to be "success or failure"
Jan 31 12:26:46.774: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 109.069729ms
Jan 31 12:26:48.794: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128874486s
Jan 31 12:26:51.647: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.98224842s
Jan 31 12:26:53.665: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.000269484s
Jan 31 12:26:55.687: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.02179206s
Jan 31 12:26:57.697: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.032084791s
STEP: Saw pod success
Jan 31 12:26:57.697: INFO: Pod "client-envvars-f279ff61-4424-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:26:57.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-f279ff61-4424-11ea-aae6-0242ac110005 container env3cont: 
STEP: delete the pod
Jan 31 12:26:58.622: INFO: Waiting for pod client-envvars-f279ff61-4424-11ea-aae6-0242ac110005 to disappear
Jan 31 12:26:58.745: INFO: Pod client-envvars-f279ff61-4424-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:26:58.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7wwg9" for this suite.
Jan 31 12:27:43.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:27:43.188: INFO: namespace: e2e-tests-pods-7wwg9, resource: bindings, ignored listing per whitelist
Jan 31 12:27:43.302: INFO: namespace e2e-tests-pods-7wwg9 deletion completed in 44.528556344s

• [SLOW TEST:67.119 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
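The Pods test above checks that a pod created after a Service sees that Service injected as environment variables. A sketch of the mechanism; the Service name, ports, and grep pattern are illustrative (for a Service named `fooservice`, the kubelet injects variables such as `FOOSERVICE_SERVICE_HOST` and `FOOSERVICE_SERVICE_PORT`):

```yaml
# Sketch of service environment-variable injection: the client pod must be
# created after the Service for the variables to be present.
apiVersion: v1
kind: Service
metadata:
  name: fooservice              # illustrative name
spec:
  selector:
    app: server
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox              # illustrative image
    command: ["sh", "-c", "env | grep FOOSERVICE"]
```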
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:27:43.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 31 12:27:53.705: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan 31 12:29:25.744: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-xg9sk".
STEP: Found 0 events.
Jan 31 12:29:25.769: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan 31 12:29:25.769: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:27:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:28:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:28:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:27:53 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:23:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:23:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan 31 12:29:25.769: INFO: 
Jan 31 12:29:25.774: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan 31 12:29:25.779: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:20082436,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-31 12:29:23 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-31 12:29:23 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-31 12:29:23 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 
2020-01-31 12:29:23 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717 nginx:latest] 126698067} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} 
{[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} 
{[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 31 12:29:25.780: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan 31 12:29:25.786: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps
Jan 31 12:29:25.807: INFO: test-pod-uninitialized started at 2020-01-31 12:27:53 +0000 UTC (0+1 container statuses recorded)
Jan 31 12:29:25.807: INFO: 	Container nginx ready: true, restart count 0
Jan 31 12:29:25.807: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan 31 12:29:25.807: INFO: 	Container weave ready: true, restart count 0
Jan 31 12:29:25.807: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 12:29:25.807: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 31 12:29:25.807: INFO: 	Container coredns ready: true, restart count 0
Jan 31 12:29:25.807: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 31 12:29:25.807: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 31 12:29:25.807: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 31 12:29:25.807: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 31 12:29:25.807: INFO: 	Container coredns ready: true, restart count 0
Jan 31 12:29:25.807: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan 31 12:29:25.807: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 12:29:25.807: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0131 12:29:25.813772       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 12:29:25.913: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan 31 12:29:25.914: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m56.331978s}
Jan 31 12:29:25.914: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m56.331978s}
Jan 31 12:29:25.914: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m32.774449s}
Jan 31 12:29:25.914: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m18.465258s}
Jan 31 12:29:25.914: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m17.717121s}
Jan 31 12:29:25.914: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:1m6.982636s}
Jan 31 12:29:25.914: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:1m2.933682s}
Jan 31 12:29:25.914: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:59.921295s}
Jan 31 12:29:25.914: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:42.416224s}
Jan 31 12:29:25.914: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:27.546894s}
Jan 31 12:29:25.914: INFO: {Operation:inspect_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.853309s}
Jan 31 12:29:25.914: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:16.563345s}
Jan 31 12:29:25.914: INFO: {Operation:create_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:16.021596s}
Jan 31 12:29:25.914: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.050883s}
Jan 31 12:29:25.914: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:11.958728s}
Jan 31 12:29:25.914: INFO: {Operation:create_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:11.843775s}
Jan 31 12:29:25.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-xg9sk" for this suite.
Jan 31 12:29:32.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:29:32.042: INFO: namespace: e2e-tests-namespaces-xg9sk, resource: bindings, ignored listing per whitelist
Jan 31 12:29:32.140: INFO: namespace e2e-tests-namespaces-xg9sk deletion completed in 6.215849393s
STEP: Destroying namespace "e2e-tests-nsdeletetest-zsxvq" for this suite.
Jan 31 12:29:32.149: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-zsxvq": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-zsxvq": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-zsxvq\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc00255f440), Code:409}})

• Failure [108.848 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000a18b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:29:32.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-554ab9c6-4425-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 12:29:32.390: INFO: Waiting up to 5m0s for pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-g6kgl" to be "success or failure"
Jan 31 12:29:32.397: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664877ms
Jan 31 12:29:34.580: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189672081s
Jan 31 12:29:36.606: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215796972s
Jan 31 12:29:38.637: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246113813s
Jan 31 12:29:40.658: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.267563273s
Jan 31 12:29:42.680: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.289474914s
STEP: Saw pod success
Jan 31 12:29:42.680: INFO: Pod "pod-secrets-554c019b-4425-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:29:42.691: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-554c019b-4425-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 12:29:42.795: INFO: Waiting for pod pod-secrets-554c019b-4425-11ea-aae6-0242ac110005 to disappear
Jan 31 12:29:42.801: INFO: Pod pod-secrets-554c019b-4425-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:29:42.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-g6kgl" for this suite.
Jan 31 12:29:48.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:29:49.072: INFO: namespace: e2e-tests-secrets-g6kgl, resource: bindings, ignored listing per whitelist
Jan 31 12:29:49.077: INFO: namespace e2e-tests-secrets-g6kgl deletion completed in 6.26283214s

• [SLOW TEST:16.927 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:29:49.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5f6580b4-4425-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:30:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jh6bz" for this suite.
Jan 31 12:30:25.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:30:25.748: INFO: namespace: e2e-tests-configmap-jh6bz, resource: bindings, ignored listing per whitelist
Jan 31 12:30:25.788: INFO: namespace e2e-tests-configmap-jh6bz deletion completed in 24.348936934s

• [SLOW TEST:36.711 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:30:25.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:30:36.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hzx2v" for this suite.
Jan 31 12:31:18.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:31:18.603: INFO: namespace: e2e-tests-kubelet-test-hzx2v, resource: bindings, ignored listing per whitelist
Jan 31 12:31:18.699: INFO: namespace e2e-tests-kubelet-test-hzx2v deletion completed in 42.447692939s

• [SLOW TEST:52.910 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:31:18.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-94c6739e-4425-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 12:31:18.895: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-2s4wj" to be "success or failure"
Jan 31 12:31:18.977: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.224216ms
Jan 31 12:31:20.996: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100320858s
Jan 31 12:31:23.012: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116271856s
Jan 31 12:31:25.469: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573214985s
Jan 31 12:31:27.495: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599045078s
Jan 31 12:31:29.509: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.613097186s
STEP: Saw pod success
Jan 31 12:31:29.509: INFO: Pod "pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:31:29.513: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 12:31:29.601: INFO: Waiting for pod pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005 to disappear
Jan 31 12:31:30.406: INFO: Pod pod-projected-configmaps-94c73374-4425-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:31:30.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2s4wj" for this suite.
Jan 31 12:31:36.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:31:36.919: INFO: namespace: e2e-tests-projected-2s4wj, resource: bindings, ignored listing per whitelist
Jan 31 12:31:36.931: INFO: namespace e2e-tests-projected-2s4wj deletion completed in 6.515367696s

• [SLOW TEST:18.232 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:31:36.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:31:37.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:31:49.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7wt28" for this suite.
Jan 31 12:32:35.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:32:35.987: INFO: namespace: e2e-tests-pods-7wt28, resource: bindings, ignored listing per whitelist
Jan 31 12:32:36.064: INFO: namespace e2e-tests-pods-7wt28 deletion completed in 46.246077194s

• [SLOW TEST:59.132 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:32:36.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 12:32:36.293: INFO: Waiting up to 5m0s for pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-dsjps" to be "success or failure"
Jan 31 12:32:36.344: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.743173ms
Jan 31 12:32:38.577: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28357674s
Jan 31 12:32:40.609: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315774101s
Jan 31 12:32:42.798: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504379738s
Jan 31 12:32:45.079: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.785594845s
Jan 31 12:32:47.099: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.806255686s
STEP: Saw pod success
Jan 31 12:32:47.100: INFO: Pod "pod-c2e9d5a8-4425-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:32:47.108: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c2e9d5a8-4425-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:32:47.969: INFO: Waiting for pod pod-c2e9d5a8-4425-11ea-aae6-0242ac110005 to disappear
Jan 31 12:32:47.976: INFO: Pod pod-c2e9d5a8-4425-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:32:47.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dsjps" for this suite.
Jan 31 12:32:54.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:32:54.069: INFO: namespace: e2e-tests-emptydir-dsjps, resource: bindings, ignored listing per whitelist
Jan 31 12:32:54.228: INFO: namespace e2e-tests-emptydir-dsjps deletion completed in 6.241630849s

• [SLOW TEST:18.163 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:32:54.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-cdbd736c-4425-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 12:32:54.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-s7jlf" to be "success or failure"
Jan 31 12:32:54.585: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.38855ms
Jan 31 12:32:56.603: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111277224s
Jan 31 12:32:58.638: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146276143s
Jan 31 12:33:00.663: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171405806s
Jan 31 12:33:02.788: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.295981827s
Jan 31 12:33:04.804: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.312568564s
Jan 31 12:33:06.930: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.438832397s
STEP: Saw pod success
Jan 31 12:33:06.931: INFO: Pod "pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:33:06.950: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 31 12:33:07.025: INFO: Waiting for pod pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005 to disappear
Jan 31 12:33:07.034: INFO: Pod pod-configmaps-cdbe3ff0-4425-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:33:07.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s7jlf" for this suite.
Jan 31 12:33:13.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:33:13.346: INFO: namespace: e2e-tests-configmap-s7jlf, resource: bindings, ignored listing per whitelist
Jan 31 12:33:13.365: INFO: namespace e2e-tests-configmap-s7jlf deletion completed in 6.266777634s

• [SLOW TEST:19.136 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:33:13.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:33:13.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-pzk5n" to be "success or failure"
Jan 31 12:33:13.830: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.656603ms
Jan 31 12:33:16.193: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409257091s
Jan 31 12:33:18.211: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42750046s
Jan 31 12:33:20.254: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470257172s
Jan 31 12:33:22.270: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486326902s
Jan 31 12:33:24.289: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.504893119s
Jan 31 12:33:26.537: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.753518185s
STEP: Saw pod success
Jan 31 12:33:26.538: INFO: Pod "downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:33:26.594: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:33:26.805: INFO: Waiting for pod downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005 to disappear
Jan 31 12:33:26.817: INFO: Pod downwardapi-volume-d9208ece-4425-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:33:26.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pzk5n" for this suite.
Jan 31 12:33:32.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:33:33.098: INFO: namespace: e2e-tests-projected-pzk5n, resource: bindings, ignored listing per whitelist
Jan 31 12:33:33.118: INFO: namespace e2e-tests-projected-pzk5n deletion completed in 6.293025553s

• [SLOW TEST:19.753 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:33:33.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kvvgs
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 12:33:33.345: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 12:34:11.537: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kvvgs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:34:11.537: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:34:11.599651       8 log.go:172] (0xc0009c8bb0) (0xc0016ec8c0) Create stream
I0131 12:34:11.599787       8 log.go:172] (0xc0009c8bb0) (0xc0016ec8c0) Stream added, broadcasting: 1
I0131 12:34:11.616609       8 log.go:172] (0xc0009c8bb0) Reply frame received for 1
I0131 12:34:11.616682       8 log.go:172] (0xc0009c8bb0) (0xc0024ea140) Create stream
I0131 12:34:11.616692       8 log.go:172] (0xc0009c8bb0) (0xc0024ea140) Stream added, broadcasting: 3
I0131 12:34:11.618346       8 log.go:172] (0xc0009c8bb0) Reply frame received for 3
I0131 12:34:11.618374       8 log.go:172] (0xc0009c8bb0) (0xc0024ea1e0) Create stream
I0131 12:34:11.618382       8 log.go:172] (0xc0009c8bb0) (0xc0024ea1e0) Stream added, broadcasting: 5
I0131 12:34:11.620006       8 log.go:172] (0xc0009c8bb0) Reply frame received for 5
I0131 12:34:12.957381       8 log.go:172] (0xc0009c8bb0) Data frame received for 3
I0131 12:34:12.957615       8 log.go:172] (0xc0024ea140) (3) Data frame handling
I0131 12:34:12.957701       8 log.go:172] (0xc0024ea140) (3) Data frame sent
I0131 12:34:13.138411       8 log.go:172] (0xc0009c8bb0) (0xc0024ea140) Stream removed, broadcasting: 3
I0131 12:34:13.138821       8 log.go:172] (0xc0009c8bb0) Data frame received for 1
I0131 12:34:13.138901       8 log.go:172] (0xc0016ec8c0) (1) Data frame handling
I0131 12:34:13.138944       8 log.go:172] (0xc0016ec8c0) (1) Data frame sent
I0131 12:34:13.138999       8 log.go:172] (0xc0009c8bb0) (0xc0016ec8c0) Stream removed, broadcasting: 1
I0131 12:34:13.139039       8 log.go:172] (0xc0009c8bb0) (0xc0024ea1e0) Stream removed, broadcasting: 5
I0131 12:34:13.139105       8 log.go:172] (0xc0009c8bb0) Go away received
I0131 12:34:13.140718       8 log.go:172] (0xc0009c8bb0) (0xc0016ec8c0) Stream removed, broadcasting: 1
I0131 12:34:13.141009       8 log.go:172] (0xc0009c8bb0) (0xc0024ea140) Stream removed, broadcasting: 3
I0131 12:34:13.141023       8 log.go:172] (0xc0009c8bb0) (0xc0024ea1e0) Stream removed, broadcasting: 5
Jan 31 12:34:13.141: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:34:13.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-kvvgs" for this suite.
Jan 31 12:34:37.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:34:37.379: INFO: namespace: e2e-tests-pod-network-test-kvvgs, resource: bindings, ignored listing per whitelist
Jan 31 12:34:37.404: INFO: namespace e2e-tests-pod-network-test-kvvgs deletion completed in 24.24046032s

• [SLOW TEST:64.285 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:34:37.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 31 12:34:37.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:38.362: INFO: stderr: ""
Jan 31 12:34:38.362: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 12:34:38.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:38.789: INFO: stderr: ""
Jan 31 12:34:38.789: INFO: stdout: "update-demo-nautilus-c89z7 update-demo-nautilus-wjrxd "
Jan 31 12:34:38.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c89z7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:38.997: INFO: stderr: ""
Jan 31 12:34:38.997: INFO: stdout: ""
Jan 31 12:34:38.997: INFO: update-demo-nautilus-c89z7 is created but not running
Jan 31 12:34:43.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:44.118: INFO: stderr: ""
Jan 31 12:34:44.119: INFO: stdout: "update-demo-nautilus-c89z7 update-demo-nautilus-wjrxd "
Jan 31 12:34:44.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c89z7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:44.224: INFO: stderr: ""
Jan 31 12:34:44.224: INFO: stdout: ""
Jan 31 12:34:44.224: INFO: update-demo-nautilus-c89z7 is created but not running
Jan 31 12:34:49.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:49.332: INFO: stderr: ""
Jan 31 12:34:49.332: INFO: stdout: "update-demo-nautilus-c89z7 update-demo-nautilus-wjrxd "
Jan 31 12:34:49.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c89z7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:49.477: INFO: stderr: ""
Jan 31 12:34:49.477: INFO: stdout: ""
Jan 31 12:34:49.477: INFO: update-demo-nautilus-c89z7 is created but not running
Jan 31 12:34:54.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:54.671: INFO: stderr: ""
Jan 31 12:34:54.671: INFO: stdout: "update-demo-nautilus-c89z7 update-demo-nautilus-wjrxd "
Jan 31 12:34:54.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c89z7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:54.770: INFO: stderr: ""
Jan 31 12:34:54.770: INFO: stdout: "true"
Jan 31 12:34:54.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c89z7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:54.925: INFO: stderr: ""
Jan 31 12:34:54.926: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:34:54.926: INFO: validating pod update-demo-nautilus-c89z7
Jan 31 12:34:54.961: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:34:54.961: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:34:54.961: INFO: update-demo-nautilus-c89z7 is verified up and running
Jan 31 12:34:54.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjrxd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:55.096: INFO: stderr: ""
Jan 31 12:34:55.096: INFO: stdout: "true"
Jan 31 12:34:55.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjrxd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:55.208: INFO: stderr: ""
Jan 31 12:34:55.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 12:34:55.208: INFO: validating pod update-demo-nautilus-wjrxd
Jan 31 12:34:55.222: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 12:34:55.222: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 12:34:55.222: INFO: update-demo-nautilus-wjrxd is verified up and running
STEP: using delete to clean up resources
Jan 31 12:34:55.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:55.372: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 12:34:55.373: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 12:34:55.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hbd5h'
Jan 31 12:34:55.611: INFO: stderr: "No resources found.\n"
Jan 31 12:34:55.611: INFO: stdout: ""
Jan 31 12:34:55.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hbd5h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 12:34:55.760: INFO: stderr: ""
Jan 31 12:34:55.760: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:34:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hbd5h" for this suite.
Jan 31 12:35:19.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:35:19.977: INFO: namespace: e2e-tests-kubectl-hbd5h, resource: bindings, ignored listing per whitelist
Jan 31 12:35:20.023: INFO: namespace e2e-tests-kubectl-hbd5h deletion completed in 24.239580426s

• [SLOW TEST:42.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:35:20.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 31 12:35:20.201: INFO: Waiting up to 5m0s for pod "pod-249aca17-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-vhl6s" to be "success or failure"
Jan 31 12:35:20.218: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.382802ms
Jan 31 12:35:22.378: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177106652s
Jan 31 12:35:24.388: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187190754s
Jan 31 12:35:26.579: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377697018s
Jan 31 12:35:28.790: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588964202s
Jan 31 12:35:30.814: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.613627394s
STEP: Saw pod success
Jan 31 12:35:30.815: INFO: Pod "pod-249aca17-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:35:30.824: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-249aca17-4426-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:35:31.734: INFO: Waiting for pod pod-249aca17-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:35:31.745: INFO: Pod pod-249aca17-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:35:31.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vhl6s" for this suite.
Jan 31 12:35:37.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:35:37.902: INFO: namespace: e2e-tests-emptydir-vhl6s, resource: bindings, ignored listing per whitelist
Jan 31 12:35:37.943: INFO: namespace e2e-tests-emptydir-vhl6s deletion completed in 6.189379484s

• [SLOW TEST:17.920 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:35:37.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-2f5043d6-4426-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 12:35:38.169: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-xsppw" to be "success or failure"
Jan 31 12:35:38.195: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.538022ms
Jan 31 12:35:40.208: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039246279s
Jan 31 12:35:42.235: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065745548s
Jan 31 12:35:44.256: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086736238s
Jan 31 12:35:46.612: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.443689388s
Jan 31 12:35:48.657: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.488358187s
STEP: Saw pod success
Jan 31 12:35:48.657: INFO: Pod "pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:35:48.669: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 12:35:48.741: INFO: Waiting for pod pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:35:48.747: INFO: Pod pod-projected-secrets-2f51b912-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:35:48.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xsppw" for this suite.
Jan 31 12:35:56.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:35:56.893: INFO: namespace: e2e-tests-projected-xsppw, resource: bindings, ignored listing per whitelist
Jan 31 12:35:57.166: INFO: namespace e2e-tests-projected-xsppw deletion completed in 8.340549104s

• [SLOW TEST:19.224 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:35:57.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 31 12:35:57.390: INFO: Waiting up to 5m0s for pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-sjqbq" to be "success or failure"
Jan 31 12:35:57.396: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.792081ms
Jan 31 12:35:59.407: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016818915s
Jan 31 12:36:01.420: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029261461s
Jan 31 12:36:03.550: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159574482s
Jan 31 12:36:06.379: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.988889053s
Jan 31 12:36:08.410: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.019517107s
STEP: Saw pod success
Jan 31 12:36:08.410: INFO: Pod "downward-api-3ac43998-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:36:08.421: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3ac43998-4426-11ea-aae6-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 31 12:36:08.582: INFO: Waiting for pod downward-api-3ac43998-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:36:08.594: INFO: Pod downward-api-3ac43998-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:36:08.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sjqbq" for this suite.
Jan 31 12:36:16.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:36:16.813: INFO: namespace: e2e-tests-downward-api-sjqbq, resource: bindings, ignored listing per whitelist
Jan 31 12:36:16.975: INFO: namespace e2e-tests-downward-api-sjqbq deletion completed in 8.371431931s

• [SLOW TEST:19.807 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
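For reference, the mechanism the test above exercises — the kubelet resolving downward-API `fieldRef` paths such as `metadata.uid` into container environment variables — can be sketched as follows. This is a simplified model, not the e2e framework's code; the pod-dict shape and helper name are assumptions for illustration.

```python
def resolve_downward_env(pod, env_specs):
    """Map env var names to values pulled from pod metadata fieldRefs
    (simplified sketch of the kubelet's downward-API substitution)."""
    field_sources = {
        "metadata.name": pod["metadata"]["name"],
        "metadata.namespace": pod["metadata"]["namespace"],
        "metadata.uid": pod["metadata"]["uid"],
    }
    return {spec["name"]: field_sources[spec["fieldRef"]] for spec in env_specs}

pod = {"metadata": {"name": "downward-api-demo",
                    "namespace": "default",
                    "uid": "3ac43998-4426-11ea-aae6-0242ac110005"}}
env = resolve_downward_env(pod, [{"name": "POD_UID", "fieldRef": "metadata.uid"}])
```

The test passes once the container's log shows the UID value, which is why the harness waits for the pod to reach `Succeeded` before fetching logs.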
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:36:16.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 31 12:36:17.310: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:36:17.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zjtzs" for this suite.
Jan 31 12:36:23.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:36:23.708: INFO: namespace: e2e-tests-kubectl-zjtzs, resource: bindings, ignored listing per whitelist
Jan 31 12:36:23.806: INFO: namespace e2e-tests-kubectl-zjtzs deletion completed in 6.287955408s

• [SLOW TEST:6.832 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
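The `--port 0` flag tested above relies on standard socket semantics: binding to port 0 asks the OS to assign a free ephemeral port, which the proxy then reports. A minimal sketch of that mechanism (not kubectl's implementation):

```python
import socket

def pick_free_port():
    """Bind to port 0 so the OS picks a free ephemeral port, then return it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
```

The test then curls `/api/` through whatever port the proxy announced on stdout.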
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:36:23.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:36:24.086: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-9sxlz" to be "success or failure"
Jan 31 12:36:24.103: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.865941ms
Jan 31 12:36:26.149: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063698498s
Jan 31 12:36:28.170: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084074047s
Jan 31 12:36:30.187: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101005744s
Jan 31 12:36:32.205: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118856655s
Jan 31 12:36:34.841: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755163375s
STEP: Saw pod success
Jan 31 12:36:34.841: INFO: Pod "downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:36:34.848: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:36:35.046: INFO: Waiting for pod downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:36:35.062: INFO: Pod downwardapi-volume-4aaf5fb3-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:36:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9sxlz" for this suite.
Jan 31 12:36:43.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:36:43.279: INFO: namespace: e2e-tests-downward-api-9sxlz, resource: bindings, ignored listing per whitelist
Jan 31 12:36:43.307: INFO: namespace e2e-tests-downward-api-9sxlz deletion completed in 8.237506994s

• [SLOW TEST:19.498 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
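The `DefaultMode` being verified above is an octal file mode applied to every file projected into the volume. The log does not show the manifest, so the concrete mode below (0400) is an assumption; `stat.filemode` renders it the way the test's container would see it via `ls -l`:

```python
import stat

# Assumed defaultMode for illustration; the actual manifest value is not in the log.
default_mode = 0o400
rendered = stat.filemode(stat.S_IFREG | default_mode)  # e.g. "-r--------"
```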
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:36:43.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:36:43.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-j22lw" for this suite.
Jan 31 12:37:05.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:37:05.921: INFO: namespace: e2e-tests-pods-j22lw, resource: bindings, ignored listing per whitelist
Jan 31 12:37:05.927: INFO: namespace e2e-tests-pods-j22lw deletion completed in 22.259872459s

• [SLOW TEST:22.620 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
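The "Pods Set QOS Class" check above verifies that the API server stamps `status.qosClass` on submission. The classification rules can be sketched as a pure function (a simplified model of Kubernetes' actual logic, which also handles per-resource partial specification):

```python
def qos_class(containers):
    """Classify a pod's QoS from its containers' resource specs (simplified):
    Guaranteed if every container pins cpu+memory with requests == limits,
    BestEffort if nothing is set anywhere, Burstable otherwise."""
    resources = ("cpu", "memory")
    all_pinned = all(
        c.get("requests", {}).get(r) is not None
        and c.get("requests", {}).get(r) == c.get("limits", {}).get(r)
        for c in containers for r in resources
    )
    none_set = all(not c.get("requests") and not c.get("limits") for c in containers)
    if none_set:
        return "BestEffort"
    if all_pinned:
        return "Guaranteed"
    return "Burstable"
```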
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:37:05.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:37:06.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-jqncj" to be "success or failure"
Jan 31 12:37:06.126: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.916492ms
Jan 31 12:37:08.138: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026750576s
Jan 31 12:37:10.180: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068779346s
Jan 31 12:37:12.211: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099390861s
Jan 31 12:37:14.385: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273243904s
Jan 31 12:37:16.400: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.288267179s
Jan 31 12:37:18.419: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.307243254s
STEP: Saw pod success
Jan 31 12:37:18.419: INFO: Pod "downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:37:18.424: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:37:18.591: INFO: Waiting for pod downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:37:18.605: INFO: Pod downwardapi-volume-63bbd85f-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:37:18.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jqncj" for this suite.
Jan 31 12:37:24.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:37:24.868: INFO: namespace: e2e-tests-downward-api-jqncj, resource: bindings, ignored listing per whitelist
Jan 31 12:37:24.997: INFO: namespace e2e-tests-downward-api-jqncj deletion completed in 6.38267985s

• [SLOW TEST:19.070 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
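The memory-limit value projected by the test above comes from a downward-API `resourceFieldRef` with a divisor: the quantity is divided by the divisor and rounded up to an integer. A sketch of that arithmetic (the byte quantities are illustrative; the actual manifest values are not in the log):

```python
def resource_field_value(quantity_bytes, divisor_bytes=1):
    """Downward-API resourceFieldRef semantics (sketch): quantity / divisor,
    rounded up to the nearest integer."""
    return -(-quantity_bytes // divisor_bytes)  # ceiling division

# e.g. a 64Mi memory limit exposed with divisor 1Mi
value = resource_field_value(64 * 1024 * 1024, 1024 * 1024)
```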
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:37:24.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 31 12:37:33.933: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6f1a86ec-4426-11ea-aae6-0242ac110005"
Jan 31 12:37:33.933: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6f1a86ec-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-pods-w2n6p" to be "terminated due to deadline exceeded"
Jan 31 12:37:34.002: INFO: Pod "pod-update-activedeadlineseconds-6f1a86ec-4426-11ea-aae6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 68.01257ms
Jan 31 12:37:36.065: INFO: Pod "pod-update-activedeadlineseconds-6f1a86ec-4426-11ea-aae6-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.131452214s
Jan 31 12:37:36.065: INFO: Pod "pod-update-activedeadlineseconds-6f1a86ec-4426-11ea-aae6-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:37:36.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-w2n6p" for this suite.
Jan 31 12:37:44.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:37:44.201: INFO: namespace: e2e-tests-pods-w2n6p, resource: bindings, ignored listing per whitelist
Jan 31 12:37:44.285: INFO: namespace e2e-tests-pods-w2n6p deletion completed in 8.195092841s

• [SLOW TEST:19.287 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
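The wait in the `activeDeadlineSeconds` test above polls until the pod reaches `Failed` with reason `DeadlineExceeded`. The polling pattern can be sketched as a small generic loop (not the framework's `WaitForPodCondition`, just the same shape, with the status getter and sleep injected so it is testable):

```python
import time

def wait_for_deadline_exceeded(get_phase_reason, timeout_s=300,
                               interval_s=2.0, sleep=time.sleep):
    """Poll a (phase, reason) getter until the pod is terminated due to
    deadline exceeded, or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase, reason = get_phase_reason()
        if phase == "Failed" and reason == "DeadlineExceeded":
            return True
        sleep(interval_s)
    return False
```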
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:37:44.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0131 12:37:54.707131       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 12:37:54.707: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:37:54.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5kjpq" for this suite.
Jan 31 12:38:00.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:38:00.868: INFO: namespace: e2e-tests-gc-5kjpq, resource: bindings, ignored listing per whitelist
Jan 31 12:38:01.006: INFO: namespace e2e-tests-gc-5kjpq deletion completed in 6.28879534s

• [SLOW TEST:16.721 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
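The garbage-collector behavior verified above — deleting an RC without orphaning cascades to the pods it owns — hinges on `ownerReferences`. A toy model of the non-orphaning cascade (the real controller also honors `blockOwnerDeletion`, finalizers, and orphan/foreground policies):

```python
def collect_garbage(objects, deleted_owner_uids):
    """Return the objects that survive a non-orphaning cascade: anything
    whose owners have all been deleted is collected."""
    survivors = []
    for obj in objects:
        owners = obj.get("ownerReferences", [])
        if owners and all(o["uid"] in deleted_owner_uids for o in owners):
            continue  # every owner is gone -> dependent is garbage-collected
        survivors.append(obj)
    return survivors
```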
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:38:01.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-84880a74-4426-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 12:38:01.141: INFO: Waiting up to 5m0s for pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-6wk42" to be "success or failure"
Jan 31 12:38:01.154: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.903788ms
Jan 31 12:38:03.172: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03083474s
Jan 31 12:38:05.193: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05226312s
Jan 31 12:38:07.893: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.75205601s
Jan 31 12:38:10.064: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.923221802s
Jan 31 12:38:13.260: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.118633554s
STEP: Saw pod success
Jan 31 12:38:13.260: INFO: Pod "pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:38:14.006: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 12:38:14.245: INFO: Waiting for pod pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:38:14.304: INFO: Pod pod-secrets-848a86a5-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:38:14.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6wk42" for this suite.
Jan 31 12:38:20.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:38:20.621: INFO: namespace: e2e-tests-secrets-6wk42, resource: bindings, ignored listing per whitelist
Jan 31 12:38:20.667: INFO: namespace e2e-tests-secrets-6wk42 deletion completed in 6.241996954s

• [SLOW TEST:19.661 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
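The non-root-with-`fsGroup` case above works because the kubelet gives the projected secret files the pod's `fsGroup` as their group, so a non-root reader only needs the group-read bit from `defaultMode`. The Unix permission check being relied on can be sketched as (simplified: ignores root and supplementary ACLs):

```python
def can_read(mode, file_uid, file_gid, uid, gids):
    """Classic Unix read check: owner bit if uids match, else group bit if
    the reader is in the file's group, else the 'other' bit."""
    if uid == file_uid:
        return bool(mode & 0o400)
    if file_gid in gids:
        return bool(mode & 0o040)
    return bool(mode & 0o004)

# Hypothetical values: a root-owned secret file with group set to fsGroup 1000,
# read by a non-root uid 1001 that belongs to group 1000.
readable = can_read(0o440, 0, 1000, 1001, {1000})
```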
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:38:20.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 31 12:38:45.056: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:45.056: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:45.142940       8 log.go:172] (0xc0029e4420) (0xc001fa7360) Create stream
I0131 12:38:45.143223       8 log.go:172] (0xc0029e4420) (0xc001fa7360) Stream added, broadcasting: 1
I0131 12:38:45.153355       8 log.go:172] (0xc0029e4420) Reply frame received for 1
I0131 12:38:45.153414       8 log.go:172] (0xc0029e4420) (0xc002418640) Create stream
I0131 12:38:45.153425       8 log.go:172] (0xc0029e4420) (0xc002418640) Stream added, broadcasting: 3
I0131 12:38:45.155520       8 log.go:172] (0xc0029e4420) Reply frame received for 3
I0131 12:38:45.155563       8 log.go:172] (0xc0029e4420) (0xc00185e640) Create stream
I0131 12:38:45.155579       8 log.go:172] (0xc0029e4420) (0xc00185e640) Stream added, broadcasting: 5
I0131 12:38:45.157404       8 log.go:172] (0xc0029e4420) Reply frame received for 5
I0131 12:38:45.285345       8 log.go:172] (0xc0029e4420) Data frame received for 3
I0131 12:38:45.285412       8 log.go:172] (0xc002418640) (3) Data frame handling
I0131 12:38:45.285435       8 log.go:172] (0xc002418640) (3) Data frame sent
I0131 12:38:45.430965       8 log.go:172] (0xc0029e4420) Data frame received for 1
I0131 12:38:45.431085       8 log.go:172] (0xc0029e4420) (0xc002418640) Stream removed, broadcasting: 3
I0131 12:38:45.431144       8 log.go:172] (0xc001fa7360) (1) Data frame handling
I0131 12:38:45.431174       8 log.go:172] (0xc001fa7360) (1) Data frame sent
I0131 12:38:45.431234       8 log.go:172] (0xc0029e4420) (0xc00185e640) Stream removed, broadcasting: 5
I0131 12:38:45.431277       8 log.go:172] (0xc0029e4420) (0xc001fa7360) Stream removed, broadcasting: 1
I0131 12:38:45.431295       8 log.go:172] (0xc0029e4420) Go away received
I0131 12:38:45.431761       8 log.go:172] (0xc0029e4420) (0xc001fa7360) Stream removed, broadcasting: 1
I0131 12:38:45.431787       8 log.go:172] (0xc0029e4420) (0xc002418640) Stream removed, broadcasting: 3
I0131 12:38:45.431830       8 log.go:172] (0xc0029e4420) (0xc00185e640) Stream removed, broadcasting: 5
Jan 31 12:38:45.431: INFO: Exec stderr: ""
Jan 31 12:38:45.431: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:45.432: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:45.528368       8 log.go:172] (0xc000d3c420) (0xc002418960) Create stream
I0131 12:38:45.528476       8 log.go:172] (0xc000d3c420) (0xc002418960) Stream added, broadcasting: 1
I0131 12:38:45.533096       8 log.go:172] (0xc000d3c420) Reply frame received for 1
I0131 12:38:45.533133       8 log.go:172] (0xc000d3c420) (0xc00216a140) Create stream
I0131 12:38:45.533145       8 log.go:172] (0xc000d3c420) (0xc00216a140) Stream added, broadcasting: 3
I0131 12:38:45.534067       8 log.go:172] (0xc000d3c420) Reply frame received for 3
I0131 12:38:45.534097       8 log.go:172] (0xc000d3c420) (0xc00216a1e0) Create stream
I0131 12:38:45.534108       8 log.go:172] (0xc000d3c420) (0xc00216a1e0) Stream added, broadcasting: 5
I0131 12:38:45.535064       8 log.go:172] (0xc000d3c420) Reply frame received for 5
I0131 12:38:45.741265       8 log.go:172] (0xc000d3c420) Data frame received for 3
I0131 12:38:45.741418       8 log.go:172] (0xc00216a140) (3) Data frame handling
I0131 12:38:45.741471       8 log.go:172] (0xc00216a140) (3) Data frame sent
I0131 12:38:45.979803       8 log.go:172] (0xc000d3c420) (0xc00216a140) Stream removed, broadcasting: 3
I0131 12:38:45.979937       8 log.go:172] (0xc000d3c420) Data frame received for 1
I0131 12:38:45.979958       8 log.go:172] (0xc000d3c420) (0xc00216a1e0) Stream removed, broadcasting: 5
I0131 12:38:45.979986       8 log.go:172] (0xc002418960) (1) Data frame handling
I0131 12:38:45.980007       8 log.go:172] (0xc002418960) (1) Data frame sent
I0131 12:38:45.980015       8 log.go:172] (0xc000d3c420) (0xc002418960) Stream removed, broadcasting: 1
I0131 12:38:45.980028       8 log.go:172] (0xc000d3c420) Go away received
I0131 12:38:45.980269       8 log.go:172] (0xc000d3c420) (0xc002418960) Stream removed, broadcasting: 1
I0131 12:38:45.980280       8 log.go:172] (0xc000d3c420) (0xc00216a140) Stream removed, broadcasting: 3
I0131 12:38:45.980287       8 log.go:172] (0xc000d3c420) (0xc00216a1e0) Stream removed, broadcasting: 5
Jan 31 12:38:45.980: INFO: Exec stderr: ""
Jan 31 12:38:45.980: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:45.980: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:46.072142       8 log.go:172] (0xc000d3c8f0) (0xc002418be0) Create stream
I0131 12:38:46.072328       8 log.go:172] (0xc000d3c8f0) (0xc002418be0) Stream added, broadcasting: 1
I0131 12:38:46.090086       8 log.go:172] (0xc000d3c8f0) Reply frame received for 1
I0131 12:38:46.090708       8 log.go:172] (0xc000d3c8f0) (0xc002540460) Create stream
I0131 12:38:46.090789       8 log.go:172] (0xc000d3c8f0) (0xc002540460) Stream added, broadcasting: 3
I0131 12:38:46.093034       8 log.go:172] (0xc000d3c8f0) Reply frame received for 3
I0131 12:38:46.093088       8 log.go:172] (0xc000d3c8f0) (0xc00185e6e0) Create stream
I0131 12:38:46.093108       8 log.go:172] (0xc000d3c8f0) (0xc00185e6e0) Stream added, broadcasting: 5
I0131 12:38:46.094389       8 log.go:172] (0xc000d3c8f0) Reply frame received for 5
I0131 12:38:46.209933       8 log.go:172] (0xc000d3c8f0) Data frame received for 3
I0131 12:38:46.210109       8 log.go:172] (0xc002540460) (3) Data frame handling
I0131 12:38:46.210162       8 log.go:172] (0xc002540460) (3) Data frame sent
I0131 12:38:46.354034       8 log.go:172] (0xc000d3c8f0) (0xc002540460) Stream removed, broadcasting: 3
I0131 12:38:46.354419       8 log.go:172] (0xc000d3c8f0) Data frame received for 1
I0131 12:38:46.354436       8 log.go:172] (0xc002418be0) (1) Data frame handling
I0131 12:38:46.354452       8 log.go:172] (0xc002418be0) (1) Data frame sent
I0131 12:38:46.354475       8 log.go:172] (0xc000d3c8f0) (0xc002418be0) Stream removed, broadcasting: 1
I0131 12:38:46.354882       8 log.go:172] (0xc000d3c8f0) (0xc00185e6e0) Stream removed, broadcasting: 5
I0131 12:38:46.354934       8 log.go:172] (0xc000d3c8f0) (0xc002418be0) Stream removed, broadcasting: 1
I0131 12:38:46.354942       8 log.go:172] (0xc000d3c8f0) (0xc002540460) Stream removed, broadcasting: 3
I0131 12:38:46.354949       8 log.go:172] (0xc000d3c8f0) (0xc00185e6e0) Stream removed, broadcasting: 5
I0131 12:38:46.355609       8 log.go:172] (0xc000d3c8f0) Go away received
Jan 31 12:38:46.355: INFO: Exec stderr: ""
Jan 31 12:38:46.356: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:46.356: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:46.485068       8 log.go:172] (0xc000d3cdc0) (0xc002418dc0) Create stream
I0131 12:38:46.485255       8 log.go:172] (0xc000d3cdc0) (0xc002418dc0) Stream added, broadcasting: 1
I0131 12:38:46.495281       8 log.go:172] (0xc000d3cdc0) Reply frame received for 1
I0131 12:38:46.495386       8 log.go:172] (0xc000d3cdc0) (0xc00185e820) Create stream
I0131 12:38:46.495408       8 log.go:172] (0xc000d3cdc0) (0xc00185e820) Stream added, broadcasting: 3
I0131 12:38:46.497179       8 log.go:172] (0xc000d3cdc0) Reply frame received for 3
I0131 12:38:46.497207       8 log.go:172] (0xc000d3cdc0) (0xc002418e60) Create stream
I0131 12:38:46.497215       8 log.go:172] (0xc000d3cdc0) (0xc002418e60) Stream added, broadcasting: 5
I0131 12:38:46.498541       8 log.go:172] (0xc000d3cdc0) Reply frame received for 5
I0131 12:38:46.725069       8 log.go:172] (0xc000d3cdc0) Data frame received for 3
I0131 12:38:46.725230       8 log.go:172] (0xc00185e820) (3) Data frame handling
I0131 12:38:46.725329       8 log.go:172] (0xc00185e820) (3) Data frame sent
I0131 12:38:46.933930       8 log.go:172] (0xc000d3cdc0) Data frame received for 1
I0131 12:38:46.934255       8 log.go:172] (0xc002418dc0) (1) Data frame handling
I0131 12:38:46.934312       8 log.go:172] (0xc002418dc0) (1) Data frame sent
I0131 12:38:46.935616       8 log.go:172] (0xc000d3cdc0) (0xc002418e60) Stream removed, broadcasting: 5
I0131 12:38:46.935995       8 log.go:172] (0xc000d3cdc0) (0xc00185e820) Stream removed, broadcasting: 3
I0131 12:38:46.936074       8 log.go:172] (0xc000d3cdc0) (0xc002418dc0) Stream removed, broadcasting: 1
I0131 12:38:46.936098       8 log.go:172] (0xc000d3cdc0) Go away received
I0131 12:38:46.936566       8 log.go:172] (0xc000d3cdc0) (0xc002418dc0) Stream removed, broadcasting: 1
I0131 12:38:46.936598       8 log.go:172] (0xc000d3cdc0) (0xc00185e820) Stream removed, broadcasting: 3
I0131 12:38:46.936802       8 log.go:172] (0xc000d3cdc0) (0xc002418e60) Stream removed, broadcasting: 5
Jan 31 12:38:46.936: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 31 12:38:46.937: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:46.937: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:47.055922       8 log.go:172] (0xc001b522c0) (0xc002540780) Create stream
I0131 12:38:47.056125       8 log.go:172] (0xc001b522c0) (0xc002540780) Stream added, broadcasting: 1
I0131 12:38:47.060571       8 log.go:172] (0xc001b522c0) Reply frame received for 1
I0131 12:38:47.060633       8 log.go:172] (0xc001b522c0) (0xc001b03360) Create stream
I0131 12:38:47.060641       8 log.go:172] (0xc001b522c0) (0xc001b03360) Stream added, broadcasting: 3
I0131 12:38:47.062059       8 log.go:172] (0xc001b522c0) Reply frame received for 3
I0131 12:38:47.062086       8 log.go:172] (0xc001b522c0) (0xc00185e8c0) Create stream
I0131 12:38:47.062094       8 log.go:172] (0xc001b522c0) (0xc00185e8c0) Stream added, broadcasting: 5
I0131 12:38:47.063111       8 log.go:172] (0xc001b522c0) Reply frame received for 5
I0131 12:38:47.186232       8 log.go:172] (0xc001b522c0) Data frame received for 3
I0131 12:38:47.186373       8 log.go:172] (0xc001b03360) (3) Data frame handling
I0131 12:38:47.186401       8 log.go:172] (0xc001b03360) (3) Data frame sent
I0131 12:38:47.331289       8 log.go:172] (0xc001b522c0) Data frame received for 1
I0131 12:38:47.331489       8 log.go:172] (0xc001b522c0) (0xc00185e8c0) Stream removed, broadcasting: 5
I0131 12:38:47.331554       8 log.go:172] (0xc002540780) (1) Data frame handling
I0131 12:38:47.331583       8 log.go:172] (0xc001b522c0) (0xc001b03360) Stream removed, broadcasting: 3
I0131 12:38:47.331606       8 log.go:172] (0xc002540780) (1) Data frame sent
I0131 12:38:47.331620       8 log.go:172] (0xc001b522c0) (0xc002540780) Stream removed, broadcasting: 1
I0131 12:38:47.331664       8 log.go:172] (0xc001b522c0) Go away received
I0131 12:38:47.331952       8 log.go:172] (0xc001b522c0) (0xc002540780) Stream removed, broadcasting: 1
I0131 12:38:47.331976       8 log.go:172] (0xc001b522c0) (0xc001b03360) Stream removed, broadcasting: 3
I0131 12:38:47.331990       8 log.go:172] (0xc001b522c0) (0xc00185e8c0) Stream removed, broadcasting: 5
Jan 31 12:38:47.332: INFO: Exec stderr: ""
Jan 31 12:38:47.332: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:47.332: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:47.423900       8 log.go:172] (0xc001b52790) (0xc002540a00) Create stream
I0131 12:38:47.424294       8 log.go:172] (0xc001b52790) (0xc002540a00) Stream added, broadcasting: 1
I0131 12:38:47.435263       8 log.go:172] (0xc001b52790) Reply frame received for 1
I0131 12:38:47.435366       8 log.go:172] (0xc001b52790) (0xc002540b40) Create stream
I0131 12:38:47.435388       8 log.go:172] (0xc001b52790) (0xc002540b40) Stream added, broadcasting: 3
I0131 12:38:47.438091       8 log.go:172] (0xc001b52790) Reply frame received for 3
I0131 12:38:47.438119       8 log.go:172] (0xc001b52790) (0xc002540be0) Create stream
I0131 12:38:47.438133       8 log.go:172] (0xc001b52790) (0xc002540be0) Stream added, broadcasting: 5
I0131 12:38:47.439638       8 log.go:172] (0xc001b52790) Reply frame received for 5
I0131 12:38:47.567111       8 log.go:172] (0xc001b52790) Data frame received for 3
I0131 12:38:47.567227       8 log.go:172] (0xc002540b40) (3) Data frame handling
I0131 12:38:47.567261       8 log.go:172] (0xc002540b40) (3) Data frame sent
I0131 12:38:47.678417       8 log.go:172] (0xc001b52790) Data frame received for 1
I0131 12:38:47.678524       8 log.go:172] (0xc001b52790) (0xc002540be0) Stream removed, broadcasting: 5
I0131 12:38:47.678655       8 log.go:172] (0xc002540a00) (1) Data frame handling
I0131 12:38:47.678733       8 log.go:172] (0xc002540a00) (1) Data frame sent
I0131 12:38:47.678852       8 log.go:172] (0xc001b52790) (0xc002540b40) Stream removed, broadcasting: 3
I0131 12:38:47.678923       8 log.go:172] (0xc001b52790) (0xc002540a00) Stream removed, broadcasting: 1
I0131 12:38:47.678971       8 log.go:172] (0xc001b52790) Go away received
I0131 12:38:47.679802       8 log.go:172] (0xc001b52790) (0xc002540a00) Stream removed, broadcasting: 1
I0131 12:38:47.679832       8 log.go:172] (0xc001b52790) (0xc002540b40) Stream removed, broadcasting: 3
I0131 12:38:47.679853       8 log.go:172] (0xc001b52790) (0xc002540be0) Stream removed, broadcasting: 5
Jan 31 12:38:47.679: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 31 12:38:47.680: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:47.680: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:47.767382       8 log.go:172] (0xc0029e48f0) (0xc001fa75e0) Create stream
I0131 12:38:47.767459       8 log.go:172] (0xc0029e48f0) (0xc001fa75e0) Stream added, broadcasting: 1
I0131 12:38:47.786955       8 log.go:172] (0xc0029e48f0) Reply frame received for 1
I0131 12:38:47.787127       8 log.go:172] (0xc0029e48f0) (0xc0016fa000) Create stream
I0131 12:38:47.787151       8 log.go:172] (0xc0029e48f0) (0xc0016fa000) Stream added, broadcasting: 3
I0131 12:38:47.789235       8 log.go:172] (0xc0029e48f0) Reply frame received for 3
I0131 12:38:47.789379       8 log.go:172] (0xc0029e48f0) (0xc001f58000) Create stream
I0131 12:38:47.789395       8 log.go:172] (0xc0029e48f0) (0xc001f58000) Stream added, broadcasting: 5
I0131 12:38:47.792343       8 log.go:172] (0xc0029e48f0) Reply frame received for 5
I0131 12:38:47.919950       8 log.go:172] (0xc0029e48f0) Data frame received for 3
I0131 12:38:47.920129       8 log.go:172] (0xc0016fa000) (3) Data frame handling
I0131 12:38:47.920180       8 log.go:172] (0xc0016fa000) (3) Data frame sent
I0131 12:38:48.034244       8 log.go:172] (0xc0029e48f0) Data frame received for 1
I0131 12:38:48.034392       8 log.go:172] (0xc001fa75e0) (1) Data frame handling
I0131 12:38:48.034439       8 log.go:172] (0xc001fa75e0) (1) Data frame sent
I0131 12:38:48.035365       8 log.go:172] (0xc0029e48f0) (0xc001f58000) Stream removed, broadcasting: 5
I0131 12:38:48.035810       8 log.go:172] (0xc0029e48f0) (0xc001fa75e0) Stream removed, broadcasting: 1
I0131 12:38:48.036325       8 log.go:172] (0xc0029e48f0) (0xc0016fa000) Stream removed, broadcasting: 3
I0131 12:38:48.036419       8 log.go:172] (0xc0029e48f0) Go away received
I0131 12:38:48.036530       8 log.go:172] (0xc0029e48f0) (0xc001fa75e0) Stream removed, broadcasting: 1
I0131 12:38:48.036557       8 log.go:172] (0xc0029e48f0) (0xc0016fa000) Stream removed, broadcasting: 3
I0131 12:38:48.036576       8 log.go:172] (0xc0029e48f0) (0xc001f58000) Stream removed, broadcasting: 5
Jan 31 12:38:48.036: INFO: Exec stderr: ""
Jan 31 12:38:48.037: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:48.037: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:48.129079       8 log.go:172] (0xc0009c8790) (0xc0019c20a0) Create stream
I0131 12:38:48.129219       8 log.go:172] (0xc0009c8790) (0xc0019c20a0) Stream added, broadcasting: 1
I0131 12:38:48.143289       8 log.go:172] (0xc0009c8790) Reply frame received for 1
I0131 12:38:48.143324       8 log.go:172] (0xc0009c8790) (0xc001f581e0) Create stream
I0131 12:38:48.143336       8 log.go:172] (0xc0009c8790) (0xc001f581e0) Stream added, broadcasting: 3
I0131 12:38:48.144504       8 log.go:172] (0xc0009c8790) Reply frame received for 3
I0131 12:38:48.144544       8 log.go:172] (0xc0009c8790) (0xc001c74140) Create stream
I0131 12:38:48.144558       8 log.go:172] (0xc0009c8790) (0xc001c74140) Stream added, broadcasting: 5
I0131 12:38:48.145692       8 log.go:172] (0xc0009c8790) Reply frame received for 5
I0131 12:38:48.346792       8 log.go:172] (0xc0009c8790) Data frame received for 3
I0131 12:38:48.346894       8 log.go:172] (0xc001f581e0) (3) Data frame handling
I0131 12:38:48.346913       8 log.go:172] (0xc001f581e0) (3) Data frame sent
I0131 12:38:48.536052       8 log.go:172] (0xc0009c8790) (0xc001f581e0) Stream removed, broadcasting: 3
I0131 12:38:48.536190       8 log.go:172] (0xc0009c8790) Data frame received for 1
I0131 12:38:48.536219       8 log.go:172] (0xc0009c8790) (0xc001c74140) Stream removed, broadcasting: 5
I0131 12:38:48.536274       8 log.go:172] (0xc0019c20a0) (1) Data frame handling
I0131 12:38:48.536299       8 log.go:172] (0xc0019c20a0) (1) Data frame sent
I0131 12:38:48.536309       8 log.go:172] (0xc0009c8790) (0xc0019c20a0) Stream removed, broadcasting: 1
I0131 12:38:48.536331       8 log.go:172] (0xc0009c8790) Go away received
I0131 12:38:48.536748       8 log.go:172] (0xc0009c8790) (0xc0019c20a0) Stream removed, broadcasting: 1
I0131 12:38:48.536794       8 log.go:172] (0xc0009c8790) (0xc001f581e0) Stream removed, broadcasting: 3
I0131 12:38:48.536814       8 log.go:172] (0xc0009c8790) (0xc001c74140) Stream removed, broadcasting: 5
Jan 31 12:38:48.536: INFO: Exec stderr: ""
Jan 31 12:38:48.537: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:48.537: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:48.650758       8 log.go:172] (0xc0009c8f20) (0xc0019c23c0) Create stream
I0131 12:38:48.650910       8 log.go:172] (0xc0009c8f20) (0xc0019c23c0) Stream added, broadcasting: 1
I0131 12:38:48.658616       8 log.go:172] (0xc0009c8f20) Reply frame received for 1
I0131 12:38:48.658753       8 log.go:172] (0xc0009c8f20) (0xc001a0a000) Create stream
I0131 12:38:48.658769       8 log.go:172] (0xc0009c8f20) (0xc001a0a000) Stream added, broadcasting: 3
I0131 12:38:48.661547       8 log.go:172] (0xc0009c8f20) Reply frame received for 3
I0131 12:38:48.661580       8 log.go:172] (0xc0009c8f20) (0xc001f58280) Create stream
I0131 12:38:48.661597       8 log.go:172] (0xc0009c8f20) (0xc001f58280) Stream added, broadcasting: 5
I0131 12:38:48.663175       8 log.go:172] (0xc0009c8f20) Reply frame received for 5
I0131 12:38:48.787875       8 log.go:172] (0xc0009c8f20) Data frame received for 3
I0131 12:38:48.787961       8 log.go:172] (0xc001a0a000) (3) Data frame handling
I0131 12:38:48.787980       8 log.go:172] (0xc001a0a000) (3) Data frame sent
I0131 12:38:48.936037       8 log.go:172] (0xc0009c8f20) Data frame received for 1
I0131 12:38:48.936248       8 log.go:172] (0xc0009c8f20) (0xc001a0a000) Stream removed, broadcasting: 3
I0131 12:38:48.936330       8 log.go:172] (0xc0019c23c0) (1) Data frame handling
I0131 12:38:48.936346       8 log.go:172] (0xc0019c23c0) (1) Data frame sent
I0131 12:38:48.936356       8 log.go:172] (0xc0009c8f20) (0xc0019c23c0) Stream removed, broadcasting: 1
I0131 12:38:48.936917       8 log.go:172] (0xc0009c8f20) (0xc001f58280) Stream removed, broadcasting: 5
I0131 12:38:48.937148       8 log.go:172] (0xc0009c8f20) Go away received
I0131 12:38:48.937298       8 log.go:172] (0xc0009c8f20) (0xc0019c23c0) Stream removed, broadcasting: 1
I0131 12:38:48.937333       8 log.go:172] (0xc0009c8f20) (0xc001a0a000) Stream removed, broadcasting: 3
I0131 12:38:48.937347       8 log.go:172] (0xc0009c8f20) (0xc001f58280) Stream removed, broadcasting: 5
Jan 31 12:38:48.937: INFO: Exec stderr: ""
Jan 31 12:38:48.937: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j9d2l PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:38:48.937: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:38:49.026697       8 log.go:172] (0xc0029e4420) (0xc001f58820) Create stream
I0131 12:38:49.027051       8 log.go:172] (0xc0029e4420) (0xc001f58820) Stream added, broadcasting: 1
I0131 12:38:49.035553       8 log.go:172] (0xc0029e4420) Reply frame received for 1
I0131 12:38:49.035598       8 log.go:172] (0xc0029e4420) (0xc001f588c0) Create stream
I0131 12:38:49.035608       8 log.go:172] (0xc0029e4420) (0xc001f588c0) Stream added, broadcasting: 3
I0131 12:38:49.037014       8 log.go:172] (0xc0029e4420) Reply frame received for 3
I0131 12:38:49.037052       8 log.go:172] (0xc0029e4420) (0xc001f58960) Create stream
I0131 12:38:49.037066       8 log.go:172] (0xc0029e4420) (0xc001f58960) Stream added, broadcasting: 5
I0131 12:38:49.038225       8 log.go:172] (0xc0029e4420) Reply frame received for 5
I0131 12:38:49.150839       8 log.go:172] (0xc0029e4420) Data frame received for 3
I0131 12:38:49.150914       8 log.go:172] (0xc001f588c0) (3) Data frame handling
I0131 12:38:49.150937       8 log.go:172] (0xc001f588c0) (3) Data frame sent
I0131 12:38:49.264895       8 log.go:172] (0xc0029e4420) Data frame received for 1
I0131 12:38:49.264980       8 log.go:172] (0xc0029e4420) (0xc001f588c0) Stream removed, broadcasting: 3
I0131 12:38:49.265026       8 log.go:172] (0xc001f58820) (1) Data frame handling
I0131 12:38:49.265047       8 log.go:172] (0xc001f58820) (1) Data frame sent
I0131 12:38:49.265141       8 log.go:172] (0xc0029e4420) (0xc001f58960) Stream removed, broadcasting: 5
I0131 12:38:49.265172       8 log.go:172] (0xc0029e4420) (0xc001f58820) Stream removed, broadcasting: 1
I0131 12:38:49.265188       8 log.go:172] (0xc0029e4420) Go away received
I0131 12:38:49.265450       8 log.go:172] (0xc0029e4420) (0xc001f58820) Stream removed, broadcasting: 1
I0131 12:38:49.265463       8 log.go:172] (0xc0029e4420) (0xc001f588c0) Stream removed, broadcasting: 3
I0131 12:38:49.265473       8 log.go:172] (0xc0029e4420) (0xc001f58960) Stream removed, broadcasting: 5
Jan 31 12:38:49.265: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:38:49.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-j9d2l" for this suite.
Jan 31 12:39:43.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:39:43.498: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-j9d2l, resource: bindings, ignored listing per whitelist
Jan 31 12:39:43.508: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-j9d2l deletion completed in 54.227716765s

• [SLOW TEST:82.841 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
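The exec transcripts above boil down to one check per container: `cat` the hosts file and decide whether its content is kubelet-managed. As a hedged sketch (not the e2e framework's actual Go code), kubelet-written hosts files begin with a `# Kubernetes-managed hosts file` comment header, so the pass condition can be modeled as:

```python
# Minimal sketch of the test's core check (assumption: kubelet prefixes the
# hosts files it manages with this comment header).
KUBELET_HEADER = "# Kubernetes-managed hosts file"

def is_kubelet_managed(hosts_content: str) -> bool:
    """Return True if the content looks like a kubelet-written /etc/hosts."""
    return hosts_content.lstrip().startswith(KUBELET_HEADER)

# Containers without their own /etc/hosts mount should see the managed file;
# containers that mount /etc/hosts themselves, and hostNetwork pods, should not.
managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
host_file = "127.0.0.1\tlocalhost\n"
print(is_kubelet_managed(managed), is_kubelet_managed(host_file))
```

This mirrors why the test execs `cat /etc/hosts` and `cat /etc/hosts-original` in every container of both `test-pod` and `test-host-network-pod` before comparing the results.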
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:39:43.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:39:43.742: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-g9mdl" to be "success or failure"
Jan 31 12:39:43.843: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.200173ms
Jan 31 12:39:46.588: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.845339512s
Jan 31 12:39:48.607: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.864291146s
Jan 31 12:39:50.639: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.896545417s
Jan 31 12:39:52.652: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909250144s
Jan 31 12:39:54.664: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.92151971s
Jan 31 12:39:56.689: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.946191766s
STEP: Saw pod success
Jan 31 12:39:56.689: INFO: Pod "downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:39:56.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:39:57.434: INFO: Waiting for pod downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005 to disappear
Jan 31 12:39:57.645: INFO: Pod downwardapi-volume-c1b0ab71-4426-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:39:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g9mdl" for this suite.
Jan 31 12:40:03.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:40:04.024: INFO: namespace: e2e-tests-projected-g9mdl, resource: bindings, ignored listing per whitelist
Jan 31 12:40:04.065: INFO: namespace e2e-tests-projected-g9mdl deletion completed in 6.40368715s

• [SLOW TEST:20.556 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
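For context (hedged, with hypothetical numbers): this test mounts a downward-API volume whose file exposes `requests.cpu` through a `resourceFieldRef` with a divisor; the value written to the file is the resource quantity divided by the divisor, rounded up. A small sketch of that arithmetic:

```python
def projected_cpu(request_millicores: int, divisor_millicores: int) -> int:
    # Downward-API resource values are divided by the divisor, rounding up.
    return -(-request_millicores // divisor_millicores)

print(projected_cpu(250, 1))     # divisor "1m": the mounted file reads "250"
print(projected_cpu(250, 1000))  # divisor "1" (whole cores): rounds up to "1"
```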
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:40:04.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-znf7r
Jan 31 12:40:12.322: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-znf7r
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 12:40:12.327: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:44:13.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-znf7r" for this suite.
Jan 31 12:44:19.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:44:19.371: INFO: namespace: e2e-tests-container-probe-znf7r, resource: bindings, ignored listing per whitelist
Jan 31 12:44:19.482: INFO: namespace e2e-tests-container-probe-znf7r deletion completed in 6.285660553s

• [SLOW TEST:255.417 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:44:19.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:44:19.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-jftdt" to be "success or failure"
Jan 31 12:44:19.795: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.151848ms
Jan 31 12:44:21.869: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13604587s
Jan 31 12:44:23.888: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155177256s
Jan 31 12:44:26.402: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.668724985s
Jan 31 12:44:28.423: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690280426s
Jan 31 12:44:30.430: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.69714176s
STEP: Saw pod success
Jan 31 12:44:30.430: INFO: Pod "downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:44:30.433: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:44:31.504: INFO: Waiting for pod downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005 to disappear
Jan 31 12:44:31.771: INFO: Pod downwardapi-volume-66316de6-4427-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:44:31.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jftdt" for this suite.
Jan 31 12:44:37.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:44:38.049: INFO: namespace: e2e-tests-downward-api-jftdt, resource: bindings, ignored listing per whitelist
Jan 31 12:44:38.097: INFO: namespace e2e-tests-downward-api-jftdt deletion completed in 6.307343342s

• [SLOW TEST:18.614 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:44:38.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-7150938b-4427-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-7150938b-4427-11ea-aae6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:44:50.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-986zt" for this suite.
Jan 31 12:45:14.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:45:15.081: INFO: namespace: e2e-tests-configmap-986zt, resource: bindings, ignored listing per whitelist
Jan 31 12:45:15.124: INFO: namespace e2e-tests-configmap-986zt deletion completed in 24.230902765s

• [SLOW TEST:37.027 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
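ConfigMap volume updates are eventually consistent: the kubelet syncs the projected file on its own cadence, which is why the test above spends the "waiting to observe update in volume" step polling the mounted file (about twelve seconds elapse in this run between updating the ConfigMap and tearing down). A hedged sketch of such a wait loop, with hypothetical helper names:

```python
import time

def wait_for_file_value(read_file, expected, timeout_s=60.0, interval_s=1.0,
                        sleep=time.sleep, clock=time.monotonic):
    # Poll until the mounted file reflects the updated ConfigMap value,
    # or give up after timeout_s.
    deadline = clock() + timeout_s
    while clock() < deadline:
        if read_file() == expected:
            return True
        sleep(interval_s)
    return False

# Simulated kubelet sync: the old value is served twice before the update lands.
values = iter(["old-value", "old-value", "new-value"])
print(wait_for_file_value(lambda: next(values), "new-value", sleep=lambda s: None))
```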
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:45:15.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 12:45:15.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-x4tvl'
Jan 31 12:45:17.214: INFO: stderr: ""
Jan 31 12:45:17.215: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 31 12:45:27.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-x4tvl -o json'
Jan 31 12:45:27.446: INFO: stderr: ""
Jan 31 12:45:27.446: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-31T12:45:17Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-x4tvl\",\n        \"resourceVersion\": \"20084289\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-x4tvl/pods/e2e-test-nginx-pod\",\n        \"uid\": \"886f05e4-4427-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-fqh9r\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-fqh9r\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-fqh9r\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T12:45:17Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T12:45:26Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T12:45:26Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-31T12:45:17Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://24c7f3eadba2bd3c6555f5d5fc92694c95c26caaf537bd8bcef92bf718502e10\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-31T12:45:25Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-31T12:45:17Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 31 12:45:27.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-x4tvl'
Jan 31 12:45:27.985: INFO: stderr: ""
Jan 31 12:45:27.986: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 31 12:45:27.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-x4tvl'
Jan 31 12:45:36.425: INFO: stderr: ""
Jan 31 12:45:36.425: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:45:36.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x4tvl" for this suite.
Jan 31 12:45:42.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:45:42.570: INFO: namespace: e2e-tests-kubectl-x4tvl, resource: bindings, ignored listing per whitelist
Jan 31 12:45:42.679: INFO: namespace e2e-tests-kubectl-x4tvl deletion completed in 6.240754433s

• [SLOW TEST:27.555 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:45:42.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-r7hdz
I0131 12:45:42.988234       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-r7hdz, replica count: 1
I0131 12:45:44.039200       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:45.039637       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:46.040023       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:47.040722       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:48.041216       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:49.041919       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:50.042764       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:51.043440       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:52.044433       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 12:45:53.045091       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 12:45:53.242: INFO: Created: latency-svc-swhgl
Jan 31 12:45:53.339: INFO: Got endpoints: latency-svc-swhgl [194.041647ms]
Jan 31 12:45:53.406: INFO: Created: latency-svc-z7tpf
Jan 31 12:45:53.494: INFO: Got endpoints: latency-svc-z7tpf [152.650843ms]
Jan 31 12:45:53.538: INFO: Created: latency-svc-jjvt7
Jan 31 12:45:53.551: INFO: Got endpoints: latency-svc-jjvt7 [211.395957ms]
Jan 31 12:45:53.815: INFO: Created: latency-svc-k9phj
Jan 31 12:45:53.879: INFO: Got endpoints: latency-svc-k9phj [538.123054ms]
Jan 31 12:45:54.189: INFO: Created: latency-svc-w29b8
Jan 31 12:45:54.450: INFO: Got endpoints: latency-svc-w29b8 [1.108377866s]
Jan 31 12:45:54.471: INFO: Created: latency-svc-rthxj
Jan 31 12:45:54.506: INFO: Got endpoints: latency-svc-rthxj [1.165858431s]
Jan 31 12:45:54.720: INFO: Created: latency-svc-d42tr
Jan 31 12:45:54.739: INFO: Got endpoints: latency-svc-d42tr [1.398526615s]
Jan 31 12:45:54.892: INFO: Created: latency-svc-nqk8p
Jan 31 12:45:54.906: INFO: Got endpoints: latency-svc-nqk8p [1.565889596s]
Jan 31 12:45:54.942: INFO: Created: latency-svc-jknwt
Jan 31 12:45:54.964: INFO: Got endpoints: latency-svc-jknwt [1.624161172s]
Jan 31 12:45:55.105: INFO: Created: latency-svc-4d99c
Jan 31 12:45:55.122: INFO: Got endpoints: latency-svc-4d99c [1.780444701s]
Jan 31 12:45:55.189: INFO: Created: latency-svc-kwtm9
Jan 31 12:45:55.421: INFO: Got endpoints: latency-svc-kwtm9 [2.080293464s]
Jan 31 12:45:55.455: INFO: Created: latency-svc-r8kbq
Jan 31 12:45:55.473: INFO: Got endpoints: latency-svc-r8kbq [2.132788989s]
Jan 31 12:45:55.527: INFO: Created: latency-svc-sbmf7
Jan 31 12:45:55.855: INFO: Got endpoints: latency-svc-sbmf7 [2.51523473s]
Jan 31 12:45:56.089: INFO: Created: latency-svc-8ckwk
Jan 31 12:45:56.111: INFO: Got endpoints: latency-svc-8ckwk [2.770101879s]
Jan 31 12:45:56.386: INFO: Created: latency-svc-nc24c
Jan 31 12:45:56.417: INFO: Got endpoints: latency-svc-nc24c [3.076821175s]
Jan 31 12:45:56.689: INFO: Created: latency-svc-9dhpt
Jan 31 12:45:56.697: INFO: Got endpoints: latency-svc-9dhpt [3.357505591s]
Jan 31 12:45:56.947: INFO: Created: latency-svc-fj2n5
Jan 31 12:45:57.124: INFO: Got endpoints: latency-svc-fj2n5 [3.62931587s]
Jan 31 12:45:57.209: INFO: Created: latency-svc-fh86f
Jan 31 12:45:57.410: INFO: Got endpoints: latency-svc-fh86f [3.858710194s]
Jan 31 12:45:57.495: INFO: Created: latency-svc-hjqzf
Jan 31 12:45:57.619: INFO: Got endpoints: latency-svc-hjqzf [3.739284636s]
Jan 31 12:45:57.680: INFO: Created: latency-svc-hp5fl
Jan 31 12:45:57.817: INFO: Got endpoints: latency-svc-hp5fl [3.36716565s]
Jan 31 12:45:57.847: INFO: Created: latency-svc-q2qzr
Jan 31 12:45:57.865: INFO: Got endpoints: latency-svc-q2qzr [3.358363207s]
Jan 31 12:45:57.919: INFO: Created: latency-svc-jn96w
Jan 31 12:45:58.020: INFO: Got endpoints: latency-svc-jn96w [3.280609521s]
Jan 31 12:45:58.048: INFO: Created: latency-svc-99vgq
Jan 31 12:45:58.082: INFO: Got endpoints: latency-svc-99vgq [3.176361984s]
Jan 31 12:45:58.183: INFO: Created: latency-svc-wxjj6
Jan 31 12:45:58.218: INFO: Got endpoints: latency-svc-wxjj6 [3.253665221s]
Jan 31 12:45:58.280: INFO: Created: latency-svc-xslql
Jan 31 12:45:58.421: INFO: Got endpoints: latency-svc-xslql [3.299335557s]
Jan 31 12:45:58.439: INFO: Created: latency-svc-ndm6s
Jan 31 12:45:58.636: INFO: Got endpoints: latency-svc-ndm6s [3.214253439s]
Jan 31 12:45:58.683: INFO: Created: latency-svc-dn6vb
Jan 31 12:45:58.683: INFO: Got endpoints: latency-svc-dn6vb [3.210559269s]
Jan 31 12:45:58.722: INFO: Created: latency-svc-wtpm7
Jan 31 12:45:58.824: INFO: Got endpoints: latency-svc-wtpm7 [2.968113019s]
Jan 31 12:45:58.878: INFO: Created: latency-svc-sfm4b
Jan 31 12:45:58.926: INFO: Got endpoints: latency-svc-sfm4b [2.815012022s]
Jan 31 12:45:59.106: INFO: Created: latency-svc-ht9lr
Jan 31 12:45:59.161: INFO: Got endpoints: latency-svc-ht9lr [2.744155762s]
Jan 31 12:45:59.294: INFO: Created: latency-svc-vk44g
Jan 31 12:45:59.305: INFO: Got endpoints: latency-svc-vk44g [2.607688831s]
Jan 31 12:45:59.479: INFO: Created: latency-svc-dxmnx
Jan 31 12:45:59.485: INFO: Got endpoints: latency-svc-dxmnx [2.361382659s]
Jan 31 12:45:59.527: INFO: Created: latency-svc-w22qc
Jan 31 12:45:59.535: INFO: Got endpoints: latency-svc-w22qc [2.124597373s]
Jan 31 12:45:59.689: INFO: Created: latency-svc-2rdlp
Jan 31 12:45:59.739: INFO: Got endpoints: latency-svc-2rdlp [2.119716395s]
Jan 31 12:45:59.915: INFO: Created: latency-svc-zz4mw
Jan 31 12:45:59.960: INFO: Created: latency-svc-n5lgz
Jan 31 12:45:59.960: INFO: Got endpoints: latency-svc-zz4mw [2.142828375s]
Jan 31 12:45:59.973: INFO: Got endpoints: latency-svc-n5lgz [2.107633571s]
Jan 31 12:46:00.185: INFO: Created: latency-svc-5jdbh
Jan 31 12:46:00.243: INFO: Got endpoints: latency-svc-5jdbh [2.221634227s]
Jan 31 12:46:00.391: INFO: Created: latency-svc-97674
Jan 31 12:46:00.405: INFO: Got endpoints: latency-svc-97674 [2.322722927s]
Jan 31 12:46:00.444: INFO: Created: latency-svc-mxk5t
Jan 31 12:46:00.471: INFO: Got endpoints: latency-svc-mxk5t [2.253403071s]
Jan 31 12:46:00.647: INFO: Created: latency-svc-bkd55
Jan 31 12:46:00.678: INFO: Got endpoints: latency-svc-bkd55 [2.256541527s]
Jan 31 12:46:00.871: INFO: Created: latency-svc-mwnr6
Jan 31 12:46:00.900: INFO: Got endpoints: latency-svc-mwnr6 [2.263537758s]
Jan 31 12:46:01.090: INFO: Created: latency-svc-fqwf8
Jan 31 12:46:01.119: INFO: Got endpoints: latency-svc-fqwf8 [2.435421907s]
Jan 31 12:46:01.456: INFO: Created: latency-svc-kgfgf
Jan 31 12:46:01.541: INFO: Got endpoints: latency-svc-kgfgf [2.715995388s]
Jan 31 12:46:01.574: INFO: Created: latency-svc-vndwm
Jan 31 12:46:01.645: INFO: Got endpoints: latency-svc-vndwm [2.718451255s]
Jan 31 12:46:01.736: INFO: Created: latency-svc-cqdrx
Jan 31 12:46:01.926: INFO: Got endpoints: latency-svc-cqdrx [2.764646353s]
Jan 31 12:46:01.960: INFO: Created: latency-svc-749m7
Jan 31 12:46:01.994: INFO: Got endpoints: latency-svc-749m7 [2.688248821s]
Jan 31 12:46:02.021: INFO: Created: latency-svc-qvxjp
Jan 31 12:46:02.271: INFO: Got endpoints: latency-svc-qvxjp [2.785195624s]
Jan 31 12:46:02.439: INFO: Created: latency-svc-s97m2
Jan 31 12:46:02.459: INFO: Got endpoints: latency-svc-s97m2 [2.923495073s]
Jan 31 12:46:02.779: INFO: Created: latency-svc-zdlt6
Jan 31 12:46:02.792: INFO: Got endpoints: latency-svc-zdlt6 [3.052496563s]
Jan 31 12:46:02.980: INFO: Created: latency-svc-bvn4n
Jan 31 12:46:03.072: INFO: Got endpoints: latency-svc-bvn4n [3.111990766s]
Jan 31 12:46:03.092: INFO: Created: latency-svc-4kjms
Jan 31 12:46:03.105: INFO: Got endpoints: latency-svc-4kjms [3.131647911s]
Jan 31 12:46:03.253: INFO: Created: latency-svc-djqqf
Jan 31 12:46:03.279: INFO: Got endpoints: latency-svc-djqqf [3.03640282s]
Jan 31 12:46:03.480: INFO: Created: latency-svc-t8twc
Jan 31 12:46:03.636: INFO: Got endpoints: latency-svc-t8twc [3.230352051s]
Jan 31 12:46:03.656: INFO: Created: latency-svc-r7wnb
Jan 31 12:46:03.681: INFO: Got endpoints: latency-svc-r7wnb [3.209353958s]
Jan 31 12:46:03.734: INFO: Created: latency-svc-n5lsz
Jan 31 12:46:03.890: INFO: Got endpoints: latency-svc-n5lsz [3.211466564s]
Jan 31 12:46:03.916: INFO: Created: latency-svc-c9thv
Jan 31 12:46:03.924: INFO: Got endpoints: latency-svc-c9thv [3.023902517s]
Jan 31 12:46:03.985: INFO: Created: latency-svc-rjdcs
Jan 31 12:46:04.257: INFO: Got endpoints: latency-svc-rjdcs [3.13834101s]
Jan 31 12:46:04.274: INFO: Created: latency-svc-cs92w
Jan 31 12:46:04.283: INFO: Got endpoints: latency-svc-cs92w [2.740665831s]
Jan 31 12:46:04.477: INFO: Created: latency-svc-pkh7q
Jan 31 12:46:04.517: INFO: Got endpoints: latency-svc-pkh7q [2.872152966s]
Jan 31 12:46:04.649: INFO: Created: latency-svc-t2ztq
Jan 31 12:46:04.660: INFO: Got endpoints: latency-svc-t2ztq [2.733215595s]
Jan 31 12:46:04.696: INFO: Created: latency-svc-5h7lr
Jan 31 12:46:04.704: INFO: Got endpoints: latency-svc-5h7lr [2.710327373s]
Jan 31 12:46:04.824: INFO: Created: latency-svc-psg54
Jan 31 12:46:04.866: INFO: Got endpoints: latency-svc-psg54 [2.594072312s]
Jan 31 12:46:04.904: INFO: Created: latency-svc-hfgw6
Jan 31 12:46:05.144: INFO: Got endpoints: latency-svc-hfgw6 [2.684911459s]
Jan 31 12:46:05.204: INFO: Created: latency-svc-nhtnc
Jan 31 12:46:05.231: INFO: Got endpoints: latency-svc-nhtnc [2.438649768s]
Jan 31 12:46:05.444: INFO: Created: latency-svc-xs68t
Jan 31 12:46:05.465: INFO: Got endpoints: latency-svc-xs68t [2.392106463s]
Jan 31 12:46:05.641: INFO: Created: latency-svc-sjfxv
Jan 31 12:46:05.666: INFO: Got endpoints: latency-svc-sjfxv [2.560583734s]
Jan 31 12:46:05.711: INFO: Created: latency-svc-l6ct2
Jan 31 12:46:05.901: INFO: Got endpoints: latency-svc-l6ct2 [2.621840613s]
Jan 31 12:46:05.971: INFO: Created: latency-svc-xshtb
Jan 31 12:46:06.143: INFO: Created: latency-svc-q5h8s
Jan 31 12:46:06.147: INFO: Got endpoints: latency-svc-xshtb [2.511010187s]
Jan 31 12:46:06.169: INFO: Got endpoints: latency-svc-q5h8s [2.487940888s]
Jan 31 12:46:06.444: INFO: Created: latency-svc-9qpv5
Jan 31 12:46:06.472: INFO: Got endpoints: latency-svc-9qpv5 [2.581915341s]
Jan 31 12:46:06.554: INFO: Created: latency-svc-n8d7g
Jan 31 12:46:06.641: INFO: Got endpoints: latency-svc-n8d7g [2.716665289s]
Jan 31 12:46:06.663: INFO: Created: latency-svc-prxzm
Jan 31 12:46:06.669: INFO: Got endpoints: latency-svc-prxzm [2.411328515s]
Jan 31 12:46:06.722: INFO: Created: latency-svc-zqjtf
Jan 31 12:46:06.884: INFO: Got endpoints: latency-svc-zqjtf [2.601496708s]
Jan 31 12:46:06.894: INFO: Created: latency-svc-qgdfg
Jan 31 12:46:06.935: INFO: Got endpoints: latency-svc-qgdfg [2.41727157s]
Jan 31 12:46:07.041: INFO: Created: latency-svc-z2lrw
Jan 31 12:46:07.058: INFO: Got endpoints: latency-svc-z2lrw [2.398182795s]
Jan 31 12:46:07.110: INFO: Created: latency-svc-mc6fl
Jan 31 12:46:07.130: INFO: Got endpoints: latency-svc-mc6fl [2.425885467s]
Jan 31 12:46:07.214: INFO: Created: latency-svc-6nmkr
Jan 31 12:46:07.243: INFO: Got endpoints: latency-svc-6nmkr [2.376046302s]
Jan 31 12:46:07.426: INFO: Created: latency-svc-l2cbw
Jan 31 12:46:07.440: INFO: Got endpoints: latency-svc-l2cbw [2.296083194s]
Jan 31 12:46:07.511: INFO: Created: latency-svc-59prm
Jan 31 12:46:07.769: INFO: Got endpoints: latency-svc-59prm [2.537797843s]
Jan 31 12:46:07.796: INFO: Created: latency-svc-lcmb9
Jan 31 12:46:07.830: INFO: Got endpoints: latency-svc-lcmb9 [2.364860223s]
Jan 31 12:46:08.019: INFO: Created: latency-svc-crpbg
Jan 31 12:46:08.050: INFO: Got endpoints: latency-svc-crpbg [2.384342956s]
Jan 31 12:46:08.190: INFO: Created: latency-svc-rgldj
Jan 31 12:46:08.213: INFO: Got endpoints: latency-svc-rgldj [2.310710112s]
Jan 31 12:46:08.251: INFO: Created: latency-svc-rmthg
Jan 31 12:46:08.266: INFO: Got endpoints: latency-svc-rmthg [2.118326625s]
Jan 31 12:46:08.402: INFO: Created: latency-svc-ggkvb
Jan 31 12:46:08.419: INFO: Got endpoints: latency-svc-ggkvb [2.249352022s]
Jan 31 12:46:08.459: INFO: Created: latency-svc-mtt2j
Jan 31 12:46:08.574: INFO: Got endpoints: latency-svc-mtt2j [2.100737345s]
Jan 31 12:46:08.670: INFO: Created: latency-svc-9rqkm
Jan 31 12:46:08.724: INFO: Got endpoints: latency-svc-9rqkm [2.081964673s]
Jan 31 12:46:08.784: INFO: Created: latency-svc-9mkp2
Jan 31 12:46:08.801: INFO: Got endpoints: latency-svc-9mkp2 [2.13218194s]
Jan 31 12:46:08.908: INFO: Created: latency-svc-jdsdj
Jan 31 12:46:08.940: INFO: Got endpoints: latency-svc-jdsdj [2.05522577s]
Jan 31 12:46:09.153: INFO: Created: latency-svc-hfn2s
Jan 31 12:46:09.384: INFO: Got endpoints: latency-svc-hfn2s [2.448306453s]
Jan 31 12:46:09.790: INFO: Created: latency-svc-mvpdg
Jan 31 12:46:09.987: INFO: Got endpoints: latency-svc-mvpdg [2.928850013s]
Jan 31 12:46:09.991: INFO: Created: latency-svc-4x9gd
Jan 31 12:46:10.072: INFO: Got endpoints: latency-svc-4x9gd [2.941924009s]
Jan 31 12:46:10.192: INFO: Created: latency-svc-7zcpg
Jan 31 12:46:10.244: INFO: Got endpoints: latency-svc-7zcpg [3.000486969s]
Jan 31 12:46:10.371: INFO: Created: latency-svc-9fmf7
Jan 31 12:46:10.385: INFO: Got endpoints: latency-svc-9fmf7 [2.944497565s]
Jan 31 12:46:10.435: INFO: Created: latency-svc-g97ch
Jan 31 12:46:10.448: INFO: Got endpoints: latency-svc-g97ch [2.678248489s]
Jan 31 12:46:10.630: INFO: Created: latency-svc-nv8s7
Jan 31 12:46:10.656: INFO: Got endpoints: latency-svc-nv8s7 [2.824867758s]
Jan 31 12:46:10.769: INFO: Created: latency-svc-2jhsh
Jan 31 12:46:10.833: INFO: Got endpoints: latency-svc-2jhsh [2.782273575s]
Jan 31 12:46:10.956: INFO: Created: latency-svc-ml259
Jan 31 12:46:10.959: INFO: Got endpoints: latency-svc-ml259 [2.745996982s]
Jan 31 12:46:11.013: INFO: Created: latency-svc-smzt7
Jan 31 12:46:11.031: INFO: Got endpoints: latency-svc-smzt7 [2.76511301s]
Jan 31 12:46:11.139: INFO: Created: latency-svc-h8xkx
Jan 31 12:46:11.156: INFO: Got endpoints: latency-svc-h8xkx [2.736659968s]
Jan 31 12:46:11.193: INFO: Created: latency-svc-4cccp
Jan 31 12:46:11.211: INFO: Got endpoints: latency-svc-4cccp [2.637364249s]
Jan 31 12:46:11.386: INFO: Created: latency-svc-kfkw2
Jan 31 12:46:11.416: INFO: Got endpoints: latency-svc-kfkw2 [2.691860104s]
Jan 31 12:46:11.548: INFO: Created: latency-svc-9fxdh
Jan 31 12:46:11.575: INFO: Got endpoints: latency-svc-9fxdh [2.773157413s]
Jan 31 12:46:11.722: INFO: Created: latency-svc-b9284
Jan 31 12:46:11.740: INFO: Got endpoints: latency-svc-b9284 [2.799171636s]
Jan 31 12:46:11.809: INFO: Created: latency-svc-8fdfd
Jan 31 12:46:12.007: INFO: Got endpoints: latency-svc-8fdfd [2.623357004s]
Jan 31 12:46:12.029: INFO: Created: latency-svc-pr72b
Jan 31 12:46:12.207: INFO: Got endpoints: latency-svc-pr72b [2.218976331s]
Jan 31 12:46:12.433: INFO: Created: latency-svc-kdxb7
Jan 31 12:46:12.464: INFO: Got endpoints: latency-svc-kdxb7 [2.391229747s]
Jan 31 12:46:12.674: INFO: Created: latency-svc-7tgj5
Jan 31 12:46:12.692: INFO: Got endpoints: latency-svc-7tgj5 [2.448044705s]
Jan 31 12:46:12.750: INFO: Created: latency-svc-5cms5
Jan 31 12:46:12.872: INFO: Got endpoints: latency-svc-5cms5 [2.486560951s]
Jan 31 12:46:12.916: INFO: Created: latency-svc-cgzjx
Jan 31 12:46:12.948: INFO: Got endpoints: latency-svc-cgzjx [2.500521459s]
Jan 31 12:46:13.113: INFO: Created: latency-svc-n6xh5
Jan 31 12:46:13.130: INFO: Got endpoints: latency-svc-n6xh5 [2.474124953s]
Jan 31 12:46:13.268: INFO: Created: latency-svc-gc4hw
Jan 31 12:46:13.274: INFO: Got endpoints: latency-svc-gc4hw [2.441209852s]
Jan 31 12:46:13.429: INFO: Created: latency-svc-c9gtz
Jan 31 12:46:13.448: INFO: Got endpoints: latency-svc-c9gtz [2.488070667s]
Jan 31 12:46:13.503: INFO: Created: latency-svc-g28hd
Jan 31 12:46:13.511: INFO: Got endpoints: latency-svc-g28hd [2.479522728s]
Jan 31 12:46:13.677: INFO: Created: latency-svc-j8glk
Jan 31 12:46:13.841: INFO: Got endpoints: latency-svc-j8glk [2.685632723s]
Jan 31 12:46:13.873: INFO: Created: latency-svc-jmnm9
Jan 31 12:46:13.893: INFO: Got endpoints: latency-svc-jmnm9 [2.681653877s]
Jan 31 12:46:14.018: INFO: Created: latency-svc-kft54
Jan 31 12:46:14.039: INFO: Got endpoints: latency-svc-kft54 [2.622771559s]
Jan 31 12:46:14.289: INFO: Created: latency-svc-7b87q
Jan 31 12:46:14.289: INFO: Got endpoints: latency-svc-7b87q [2.714091533s]
Jan 31 12:46:14.445: INFO: Created: latency-svc-2dw8c
Jan 31 12:46:14.463: INFO: Got endpoints: latency-svc-2dw8c [2.722943311s]
Jan 31 12:46:14.529: INFO: Created: latency-svc-ctzss
Jan 31 12:46:14.644: INFO: Got endpoints: latency-svc-ctzss [2.635832537s]
Jan 31 12:46:14.741: INFO: Created: latency-svc-279cd
Jan 31 12:46:14.855: INFO: Got endpoints: latency-svc-279cd [2.646713299s]
Jan 31 12:46:14.884: INFO: Created: latency-svc-xm7kt
Jan 31 12:46:14.889: INFO: Got endpoints: latency-svc-xm7kt [2.423894828s]
Jan 31 12:46:15.047: INFO: Created: latency-svc-hw686
Jan 31 12:46:15.062: INFO: Got endpoints: latency-svc-hw686 [2.369728029s]
Jan 31 12:46:15.123: INFO: Created: latency-svc-t9m7k
Jan 31 12:46:15.206: INFO: Got endpoints: latency-svc-t9m7k [2.334132899s]
Jan 31 12:46:15.254: INFO: Created: latency-svc-cl7h8
Jan 31 12:46:15.256: INFO: Got endpoints: latency-svc-cl7h8 [2.307206671s]
Jan 31 12:46:15.435: INFO: Created: latency-svc-lcrtq
Jan 31 12:46:15.447: INFO: Got endpoints: latency-svc-lcrtq [2.316306525s]
Jan 31 12:46:15.503: INFO: Created: latency-svc-pxvnq
Jan 31 12:46:15.602: INFO: Got endpoints: latency-svc-pxvnq [2.327512392s]
Jan 31 12:46:15.707: INFO: Created: latency-svc-cmnn5
Jan 31 12:46:15.847: INFO: Got endpoints: latency-svc-cmnn5 [2.399213604s]
Jan 31 12:46:15.938: INFO: Created: latency-svc-l9np9
Jan 31 12:46:16.033: INFO: Got endpoints: latency-svc-l9np9 [2.522085259s]
Jan 31 12:46:16.104: INFO: Created: latency-svc-ndtl4
Jan 31 12:46:16.195: INFO: Got endpoints: latency-svc-ndtl4 [2.353193317s]
Jan 31 12:46:16.441: INFO: Created: latency-svc-9ljb9
Jan 31 12:46:16.475: INFO: Got endpoints: latency-svc-9ljb9 [2.581380249s]
Jan 31 12:46:16.602: INFO: Created: latency-svc-n28mb
Jan 31 12:46:16.632: INFO: Got endpoints: latency-svc-n28mb [2.592063431s]
Jan 31 12:46:16.703: INFO: Created: latency-svc-z626j
Jan 31 12:46:16.886: INFO: Got endpoints: latency-svc-z626j [2.596715182s]
Jan 31 12:46:16.903: INFO: Created: latency-svc-gxwf5
Jan 31 12:46:16.925: INFO: Got endpoints: latency-svc-gxwf5 [2.461673746s]
Jan 31 12:46:17.071: INFO: Created: latency-svc-767rq
Jan 31 12:46:17.094: INFO: Got endpoints: latency-svc-767rq [2.449381816s]
Jan 31 12:46:17.215: INFO: Created: latency-svc-5jzfj
Jan 31 12:46:17.259: INFO: Got endpoints: latency-svc-5jzfj [2.40319108s]
Jan 31 12:46:17.436: INFO: Created: latency-svc-644bx
Jan 31 12:46:17.455: INFO: Got endpoints: latency-svc-644bx [2.566086264s]
Jan 31 12:46:17.511: INFO: Created: latency-svc-87w46
Jan 31 12:46:17.522: INFO: Got endpoints: latency-svc-87w46 [2.45955228s]
Jan 31 12:46:17.636: INFO: Created: latency-svc-tmb6w
Jan 31 12:46:17.827: INFO: Got endpoints: latency-svc-tmb6w [2.619877684s]
Jan 31 12:46:17.848: INFO: Created: latency-svc-g48rt
Jan 31 12:46:17.870: INFO: Got endpoints: latency-svc-g48rt [2.613200465s]
Jan 31 12:46:18.016: INFO: Created: latency-svc-tncfs
Jan 31 12:46:18.064: INFO: Got endpoints: latency-svc-tncfs [2.617566336s]
Jan 31 12:46:18.167: INFO: Created: latency-svc-5gx2h
Jan 31 12:46:18.180: INFO: Got endpoints: latency-svc-5gx2h [2.577028306s]
Jan 31 12:46:18.274: INFO: Created: latency-svc-n25cb
Jan 31 12:46:19.411: INFO: Got endpoints: latency-svc-n25cb [3.563490216s]
Jan 31 12:46:19.412: INFO: Created: latency-svc-mxbl6
Jan 31 12:46:19.452: INFO: Got endpoints: latency-svc-mxbl6 [3.417558884s]
Jan 31 12:46:19.590: INFO: Created: latency-svc-q2lt4
Jan 31 12:46:19.604: INFO: Got endpoints: latency-svc-q2lt4 [3.408869313s]
Jan 31 12:46:19.682: INFO: Created: latency-svc-5jlh2
Jan 31 12:46:19.798: INFO: Got endpoints: latency-svc-5jlh2 [3.323108323s]
Jan 31 12:46:19.860: INFO: Created: latency-svc-pfd2v
Jan 31 12:46:19.880: INFO: Got endpoints: latency-svc-pfd2v [3.247806298s]
Jan 31 12:46:20.072: INFO: Created: latency-svc-sd6jl
Jan 31 12:46:20.077: INFO: Got endpoints: latency-svc-sd6jl [3.19130838s]
Jan 31 12:46:20.219: INFO: Created: latency-svc-lrkd7
Jan 31 12:46:20.258: INFO: Got endpoints: latency-svc-lrkd7 [3.332882037s]
Jan 31 12:46:20.474: INFO: Created: latency-svc-7w67v
Jan 31 12:46:20.507: INFO: Got endpoints: latency-svc-7w67v [3.412606835s]
Jan 31 12:46:20.711: INFO: Created: latency-svc-5zznt
Jan 31 12:46:20.711: INFO: Got endpoints: latency-svc-5zznt [3.451841903s]
Jan 31 12:46:20.875: INFO: Created: latency-svc-2lrq5
Jan 31 12:46:20.911: INFO: Got endpoints: latency-svc-2lrq5 [3.455817998s]
Jan 31 12:46:20.962: INFO: Created: latency-svc-xxldc
Jan 31 12:46:21.076: INFO: Got endpoints: latency-svc-xxldc [3.55441037s]
Jan 31 12:46:21.122: INFO: Created: latency-svc-9nt45
Jan 31 12:46:21.127: INFO: Got endpoints: latency-svc-9nt45 [3.299934131s]
Jan 31 12:46:21.263: INFO: Created: latency-svc-2r2xw
Jan 31 12:46:21.311: INFO: Got endpoints: latency-svc-2r2xw [3.440842595s]
Jan 31 12:46:22.145: INFO: Created: latency-svc-n4lk2
Jan 31 12:46:22.184: INFO: Got endpoints: latency-svc-n4lk2 [4.118971947s]
Jan 31 12:46:22.472: INFO: Created: latency-svc-jxgt8
Jan 31 12:46:22.498: INFO: Got endpoints: latency-svc-jxgt8 [4.31857959s]
Jan 31 12:46:22.639: INFO: Created: latency-svc-s2l2j
Jan 31 12:46:22.667: INFO: Got endpoints: latency-svc-s2l2j [3.254753292s]
Jan 31 12:46:22.848: INFO: Created: latency-svc-f6kp8
Jan 31 12:46:22.910: INFO: Got endpoints: latency-svc-f6kp8 [3.458479803s]
Jan 31 12:46:22.911: INFO: Created: latency-svc-2p8pl
Jan 31 12:46:23.010: INFO: Got endpoints: latency-svc-2p8pl [3.405444519s]
Jan 31 12:46:23.063: INFO: Created: latency-svc-djwp7
Jan 31 12:46:23.071: INFO: Got endpoints: latency-svc-djwp7 [3.272068277s]
Jan 31 12:46:23.217: INFO: Created: latency-svc-rzrwp
Jan 31 12:46:23.233: INFO: Got endpoints: latency-svc-rzrwp [3.353010286s]
Jan 31 12:46:23.405: INFO: Created: latency-svc-lp4qq
Jan 31 12:46:23.411: INFO: Got endpoints: latency-svc-lp4qq [3.333103437s]
Jan 31 12:46:23.549: INFO: Created: latency-svc-ljvlz
Jan 31 12:46:23.562: INFO: Got endpoints: latency-svc-ljvlz [3.30415112s]
Jan 31 12:46:23.816: INFO: Created: latency-svc-cxbf5
Jan 31 12:46:23.979: INFO: Got endpoints: latency-svc-cxbf5 [3.471378962s]
Jan 31 12:46:24.003: INFO: Created: latency-svc-cmgn7
Jan 31 12:46:24.023: INFO: Got endpoints: latency-svc-cmgn7 [3.311380797s]
Jan 31 12:46:24.274: INFO: Created: latency-svc-bjq4n
Jan 31 12:46:24.302: INFO: Got endpoints: latency-svc-bjq4n [3.390585942s]
Jan 31 12:46:24.450: INFO: Created: latency-svc-lcwxs
Jan 31 12:46:24.455: INFO: Got endpoints: latency-svc-lcwxs [3.377696622s]
Jan 31 12:46:24.735: INFO: Created: latency-svc-5x2ms
Jan 31 12:46:24.742: INFO: Got endpoints: latency-svc-5x2ms [3.61442566s]
Jan 31 12:46:24.956: INFO: Created: latency-svc-jrnw7
Jan 31 12:46:24.976: INFO: Got endpoints: latency-svc-jrnw7 [3.664856455s]
Jan 31 12:46:25.125: INFO: Created: latency-svc-w84n5
Jan 31 12:46:25.139: INFO: Got endpoints: latency-svc-w84n5 [2.954838091s]
Jan 31 12:46:25.206: INFO: Created: latency-svc-794fs
Jan 31 12:46:25.289: INFO: Got endpoints: latency-svc-794fs [2.789997011s]
Jan 31 12:46:25.337: INFO: Created: latency-svc-t5flc
Jan 31 12:46:25.351: INFO: Got endpoints: latency-svc-t5flc [2.684084888s]
Jan 31 12:46:25.433: INFO: Created: latency-svc-nz9bc
Jan 31 12:46:25.450: INFO: Got endpoints: latency-svc-nz9bc [2.539546196s]
Jan 31 12:46:25.515: INFO: Created: latency-svc-889qd
Jan 31 12:46:25.601: INFO: Got endpoints: latency-svc-889qd [2.590545025s]
Jan 31 12:46:25.641: INFO: Created: latency-svc-4jm72
Jan 31 12:46:25.651: INFO: Got endpoints: latency-svc-4jm72 [2.579771548s]
Jan 31 12:46:25.832: INFO: Created: latency-svc-c9mv2
Jan 31 12:46:25.841: INFO: Got endpoints: latency-svc-c9mv2 [2.608108448s]
Jan 31 12:46:25.999: INFO: Created: latency-svc-8ppkw
Jan 31 12:46:26.001: INFO: Got endpoints: latency-svc-8ppkw [2.589939551s]
Jan 31 12:46:26.061: INFO: Created: latency-svc-f5pmg
Jan 31 12:46:26.180: INFO: Got endpoints: latency-svc-f5pmg [2.617341146s]
Jan 31 12:46:26.213: INFO: Created: latency-svc-5p4kg
Jan 31 12:46:26.232: INFO: Got endpoints: latency-svc-5p4kg [2.252822765s]
Jan 31 12:46:26.286: INFO: Created: latency-svc-9d9nw
Jan 31 12:46:26.482: INFO: Created: latency-svc-x95d5
Jan 31 12:46:26.482: INFO: Got endpoints: latency-svc-9d9nw [2.459359281s]
Jan 31 12:46:26.502: INFO: Got endpoints: latency-svc-x95d5 [2.200229385s]
Jan 31 12:46:26.654: INFO: Created: latency-svc-5hxf5
Jan 31 12:46:26.689: INFO: Got endpoints: latency-svc-5hxf5 [2.234313923s]
Jan 31 12:46:26.871: INFO: Created: latency-svc-swbm6
Jan 31 12:46:27.063: INFO: Got endpoints: latency-svc-swbm6 [2.321281786s]
Jan 31 12:46:27.170: INFO: Created: latency-svc-89ldr
Jan 31 12:46:27.363: INFO: Got endpoints: latency-svc-89ldr [2.386549625s]
Jan 31 12:46:27.462: INFO: Created: latency-svc-b8mjc
Jan 31 12:46:27.645: INFO: Got endpoints: latency-svc-b8mjc [2.505354899s]
Jan 31 12:46:27.917: INFO: Created: latency-svc-nsdd4
Jan 31 12:46:27.962: INFO: Got endpoints: latency-svc-nsdd4 [2.673138864s]
Jan 31 12:46:28.158: INFO: Created: latency-svc-7xt7h
Jan 31 12:46:28.182: INFO: Got endpoints: latency-svc-7xt7h [2.830982658s]
Jan 31 12:46:28.475: INFO: Created: latency-svc-6k99p
Jan 31 12:46:28.528: INFO: Got endpoints: latency-svc-6k99p [3.077554408s]
Jan 31 12:46:28.670: INFO: Created: latency-svc-78hj9
Jan 31 12:46:28.693: INFO: Got endpoints: latency-svc-78hj9 [3.091980384s]
Jan 31 12:46:28.848: INFO: Created: latency-svc-26mdb
Jan 31 12:46:28.862: INFO: Got endpoints: latency-svc-26mdb [3.210392261s]
Jan 31 12:46:28.924: INFO: Created: latency-svc-dlb4h
Jan 31 12:46:28.999: INFO: Got endpoints: latency-svc-dlb4h [3.157138895s]
Jan 31 12:46:29.036: INFO: Created: latency-svc-6kd9l
Jan 31 12:46:29.078: INFO: Got endpoints: latency-svc-6kd9l [3.076839115s]
Jan 31 12:46:29.170: INFO: Created: latency-svc-h5q6j
Jan 31 12:46:29.190: INFO: Got endpoints: latency-svc-h5q6j [3.009797181s]
Jan 31 12:46:29.226: INFO: Created: latency-svc-v2dmp
Jan 31 12:46:29.252: INFO: Got endpoints: latency-svc-v2dmp [3.02020792s]
Jan 31 12:46:29.434: INFO: Created: latency-svc-j6hgb
Jan 31 12:46:29.453: INFO: Got endpoints: latency-svc-j6hgb [2.969748034s]
Jan 31 12:46:29.613: INFO: Created: latency-svc-z8rf9
Jan 31 12:46:29.613: INFO: Got endpoints: latency-svc-z8rf9 [3.110428322s]
Jan 31 12:46:29.614: INFO: Created: latency-svc-jlv8l
Jan 31 12:46:29.614: INFO: Got endpoints: latency-svc-jlv8l [2.924556289s]
Jan 31 12:46:29.654: INFO: Created: latency-svc-n5krx
Jan 31 12:46:29.850: INFO: Got endpoints: latency-svc-n5krx [2.786479258s]
Jan 31 12:46:30.205: INFO: Created: latency-svc-8mt7p
Jan 31 12:46:30.405: INFO: Got endpoints: latency-svc-8mt7p [3.041686974s]
Jan 31 12:46:30.477: INFO: Created: latency-svc-xzjws
Jan 31 12:46:30.923: INFO: Got endpoints: latency-svc-xzjws [3.277530694s]
Jan 31 12:46:31.032: INFO: Created: latency-svc-bxczs
Jan 31 12:46:31.037: INFO: Got endpoints: latency-svc-bxczs [3.074193233s]
Jan 31 12:46:31.037: INFO: Latencies: [152.650843ms 211.395957ms 538.123054ms 1.108377866s 1.165858431s 1.398526615s 1.565889596s 1.624161172s 1.780444701s 2.05522577s 2.080293464s 2.081964673s 2.100737345s 2.107633571s 2.118326625s 2.119716395s 2.124597373s 2.13218194s 2.132788989s 2.142828375s 2.200229385s 2.218976331s 2.221634227s 2.234313923s 2.249352022s 2.252822765s 2.253403071s 2.256541527s 2.263537758s 2.296083194s 2.307206671s 2.310710112s 2.316306525s 2.321281786s 2.322722927s 2.327512392s 2.334132899s 2.353193317s 2.361382659s 2.364860223s 2.369728029s 2.376046302s 2.384342956s 2.386549625s 2.391229747s 2.392106463s 2.398182795s 2.399213604s 2.40319108s 2.411328515s 2.41727157s 2.423894828s 2.425885467s 2.435421907s 2.438649768s 2.441209852s 2.448044705s 2.448306453s 2.449381816s 2.459359281s 2.45955228s 2.461673746s 2.474124953s 2.479522728s 2.486560951s 2.487940888s 2.488070667s 2.500521459s 2.505354899s 2.511010187s 2.51523473s 2.522085259s 2.537797843s 2.539546196s 2.560583734s 2.566086264s 2.577028306s 2.579771548s 2.581380249s 2.581915341s 2.589939551s 2.590545025s 2.592063431s 2.594072312s 2.596715182s 2.601496708s 2.607688831s 2.608108448s 2.613200465s 2.617341146s 2.617566336s 2.619877684s 2.621840613s 2.622771559s 2.623357004s 2.635832537s 2.637364249s 2.646713299s 2.673138864s 2.678248489s 2.681653877s 2.684084888s 2.684911459s 2.685632723s 2.688248821s 2.691860104s 2.710327373s 2.714091533s 2.715995388s 2.716665289s 2.718451255s 2.722943311s 2.733215595s 2.736659968s 2.740665831s 2.744155762s 2.745996982s 2.764646353s 2.76511301s 2.770101879s 2.773157413s 2.782273575s 2.785195624s 2.786479258s 2.789997011s 2.799171636s 2.815012022s 2.824867758s 2.830982658s 2.872152966s 2.923495073s 2.924556289s 2.928850013s 2.941924009s 2.944497565s 2.954838091s 2.968113019s 2.969748034s 3.000486969s 3.009797181s 3.02020792s 3.023902517s 3.03640282s 3.041686974s 3.052496563s 3.074193233s 3.076821175s 3.076839115s 3.077554408s 3.091980384s 3.110428322s 3.111990766s 3.131647911s 3.13834101s 3.157138895s 3.176361984s 3.19130838s 3.209353958s 3.210392261s 3.210559269s 3.211466564s 3.214253439s 3.230352051s 3.247806298s 3.253665221s 3.254753292s 3.272068277s 3.277530694s 3.280609521s 3.299335557s 3.299934131s 3.30415112s 3.311380797s 3.323108323s 3.332882037s 3.333103437s 3.353010286s 3.357505591s 3.358363207s 3.36716565s 3.377696622s 3.390585942s 3.405444519s 3.408869313s 3.412606835s 3.417558884s 3.440842595s 3.451841903s 3.455817998s 3.458479803s 3.471378962s 3.55441037s 3.563490216s 3.61442566s 3.62931587s 3.664856455s 3.739284636s 3.858710194s 4.118971947s 4.31857959s]
Jan 31 12:46:31.037: INFO: 50 %ile: 2.681653877s
Jan 31 12:46:31.037: INFO: 90 %ile: 3.377696622s
Jan 31 12:46:31.037: INFO: 99 %ile: 4.118971947s
Jan 31 12:46:31.037: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:46:31.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-r7hdz" for this suite.
Jan 31 12:47:27.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:47:27.206: INFO: namespace: e2e-tests-svc-latency-r7hdz, resource: bindings, ignored listing per whitelist
Jan 31 12:47:27.243: INFO: namespace e2e-tests-svc-latency-r7hdz deletion completed in 56.196892794s

• [SLOW TEST:104.564 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:47:27.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 31 12:47:27.636: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:47:44.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fgkp7" for this suite.
Jan 31 12:47:52.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:47:52.799: INFO: namespace: e2e-tests-init-container-fgkp7, resource: bindings, ignored listing per whitelist
Jan 31 12:47:52.943: INFO: namespace e2e-tests-init-container-fgkp7 deletion completed in 8.282630569s

• [SLOW TEST:25.699 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:47:52.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 12:47:53.224: INFO: Waiting up to 5m0s for pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-smzcd" to be "success or failure"
Jan 31 12:47:53.242: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.694657ms
Jan 31 12:47:55.255: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030209894s
Jan 31 12:47:57.275: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050482627s
Jan 31 12:47:59.576: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.352063886s
Jan 31 12:48:01.606: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381235934s
Jan 31 12:48:03.623: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.398479636s
STEP: Saw pod success
Jan 31 12:48:03.623: INFO: Pod "pod-e56a07a7-4427-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:48:03.634: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e56a07a7-4427-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:48:03.843: INFO: Waiting for pod pod-e56a07a7-4427-11ea-aae6-0242ac110005 to disappear
Jan 31 12:48:03.875: INFO: Pod pod-e56a07a7-4427-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:48:03.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-smzcd" for this suite.
Jan 31 12:48:10.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:48:10.160: INFO: namespace: e2e-tests-emptydir-smzcd, resource: bindings, ignored listing per whitelist
Jan 31 12:48:10.297: INFO: namespace e2e-tests-emptydir-smzcd deletion completed in 6.410336079s

• [SLOW TEST:17.354 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:48:10.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:48:10.894: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"efe57692-4427-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001cb45b2), BlockOwnerDeletion:(*bool)(0xc001cb45b3)}}
Jan 31 12:48:11.051: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"efcaaddf-4427-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002559cc2), BlockOwnerDeletion:(*bool)(0xc002559cc3)}}
Jan 31 12:48:11.096: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"efde45ad-4427-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001cb4822), BlockOwnerDeletion:(*bool)(0xc001cb4823)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:48:16.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-998nd" for this suite.
Jan 31 12:48:22.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:48:22.363: INFO: namespace: e2e-tests-gc-998nd, resource: bindings, ignored listing per whitelist
Jan 31 12:48:22.466: INFO: namespace e2e-tests-gc-998nd deletion completed in 6.185152777s

• [SLOW TEST:12.169 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:48:22.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f72df175-4427-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 12:48:23.308: INFO: Waiting up to 5m0s for pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-l6wrs" to be "success or failure"
Jan 31 12:48:23.323: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.800714ms
Jan 31 12:48:25.340: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031616632s
Jan 31 12:48:27.355: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046142532s
Jan 31 12:48:29.432: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123322082s
Jan 31 12:48:32.021: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712589985s
Jan 31 12:48:34.029: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.720206076s
STEP: Saw pod success
Jan 31 12:48:34.029: INFO: Pod "pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:48:34.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 31 12:48:34.511: INFO: Waiting for pod pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005 to disappear
Jan 31 12:48:34.656: INFO: Pod pod-secrets-f75fbc92-4427-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:48:34.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-l6wrs" for this suite.
Jan 31 12:48:42.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:48:42.912: INFO: namespace: e2e-tests-secrets-l6wrs, resource: bindings, ignored listing per whitelist
Jan 31 12:48:42.928: INFO: namespace e2e-tests-secrets-l6wrs deletion completed in 8.225005537s
STEP: Destroying namespace "e2e-tests-secret-namespace-9d8hk" for this suite.
Jan 31 12:48:48.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:48:49.119: INFO: namespace: e2e-tests-secret-namespace-9d8hk, resource: bindings, ignored listing per whitelist
Jan 31 12:48:49.138: INFO: namespace e2e-tests-secret-namespace-9d8hk deletion completed in 6.210176927s

• [SLOW TEST:26.672 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:48:49.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:48:59.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-pchbt" for this suite.
Jan 31 12:49:43.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:49:43.645: INFO: namespace: e2e-tests-kubelet-test-pchbt, resource: bindings, ignored listing per whitelist
Jan 31 12:49:43.656: INFO: namespace e2e-tests-kubelet-test-pchbt deletion completed in 44.224935247s

• [SLOW TEST:54.517 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:49:43.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 31 12:49:54.727: INFO: Successfully updated pod "annotationupdate2773f241-4428-11ea-aae6-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:49:56.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fps9j" for this suite.
Jan 31 12:50:20.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:50:21.070: INFO: namespace: e2e-tests-downward-api-fps9j, resource: bindings, ignored listing per whitelist
Jan 31 12:50:21.075: INFO: namespace e2e-tests-downward-api-fps9j deletion completed in 24.267063682s

• [SLOW TEST:37.418 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:50:21.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:50:33.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-md2rm" for this suite.
Jan 31 12:50:39.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:50:39.772: INFO: namespace: e2e-tests-kubelet-test-md2rm, resource: bindings, ignored listing per whitelist
Jan 31 12:50:39.922: INFO: namespace e2e-tests-kubelet-test-md2rm deletion completed in 6.312697294s

• [SLOW TEST:18.848 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:50:39.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fn5mx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 12:50:40.209: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 12:51:14.409: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-fn5mx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 12:51:14.410: INFO: >>> kubeConfig: /root/.kube/config
I0131 12:51:14.527023       8 log.go:172] (0xc00048c840) (0xc001f58780) Create stream
I0131 12:51:14.527554       8 log.go:172] (0xc00048c840) (0xc001f58780) Stream added, broadcasting: 1
I0131 12:51:14.538716       8 log.go:172] (0xc00048c840) Reply frame received for 1
I0131 12:51:14.538834       8 log.go:172] (0xc00048c840) (0xc001fa66e0) Create stream
I0131 12:51:14.538868       8 log.go:172] (0xc00048c840) (0xc001fa66e0) Stream added, broadcasting: 3
I0131 12:51:14.541824       8 log.go:172] (0xc00048c840) Reply frame received for 3
I0131 12:51:14.541861       8 log.go:172] (0xc00048c840) (0xc001f58820) Create stream
I0131 12:51:14.541871       8 log.go:172] (0xc00048c840) (0xc001f58820) Stream added, broadcasting: 5
I0131 12:51:14.543510       8 log.go:172] (0xc00048c840) Reply frame received for 5
I0131 12:51:14.876408       8 log.go:172] (0xc00048c840) Data frame received for 3
I0131 12:51:14.876497       8 log.go:172] (0xc001fa66e0) (3) Data frame handling
I0131 12:51:14.876514       8 log.go:172] (0xc001fa66e0) (3) Data frame sent
I0131 12:51:14.998061       8 log.go:172] (0xc00048c840) Data frame received for 1
I0131 12:51:14.998169       8 log.go:172] (0xc001f58780) (1) Data frame handling
I0131 12:51:14.998192       8 log.go:172] (0xc001f58780) (1) Data frame sent
I0131 12:51:14.998218       8 log.go:172] (0xc00048c840) (0xc001f58780) Stream removed, broadcasting: 1
I0131 12:51:14.998673       8 log.go:172] (0xc00048c840) (0xc001fa66e0) Stream removed, broadcasting: 3
I0131 12:51:14.999173       8 log.go:172] (0xc00048c840) (0xc001f58820) Stream removed, broadcasting: 5
I0131 12:51:14.999206       8 log.go:172] (0xc00048c840) (0xc001f58780) Stream removed, broadcasting: 1
I0131 12:51:14.999216       8 log.go:172] (0xc00048c840) (0xc001fa66e0) Stream removed, broadcasting: 3
I0131 12:51:14.999227       8 log.go:172] (0xc00048c840) (0xc001f58820) Stream removed, broadcasting: 5
Jan 31 12:51:14.999: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:51:14.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-fn5mx" for this suite.
Jan 31 12:51:39.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:51:39.191: INFO: namespace: e2e-tests-pod-network-test-fn5mx, resource: bindings, ignored listing per whitelist
Jan 31 12:51:39.270: INFO: namespace e2e-tests-pod-network-test-fn5mx deletion completed in 24.240544069s

• [SLOW TEST:59.346 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:51:39.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-7zngw/secret-test-6c55bd89-4428-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 12:51:39.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005" in namespace "e2e-tests-secrets-7zngw" to be "success or failure"
Jan 31 12:51:39.541: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.67559ms
Jan 31 12:51:41.565: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034021373s
Jan 31 12:51:43.580: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048514491s
Jan 31 12:51:45.777: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245949477s
Jan 31 12:51:47.786: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254935453s
Jan 31 12:51:49.800: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.268666317s
STEP: Saw pod success
Jan 31 12:51:49.800: INFO: Pod "pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:51:49.806: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005 container env-test: 
STEP: delete the pod
Jan 31 12:51:49.971: INFO: Waiting for pod pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005 to disappear
Jan 31 12:51:49.984: INFO: Pod pod-configmaps-6c567256-4428-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:51:49.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7zngw" for this suite.
Jan 31 12:51:56.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:51:56.268: INFO: namespace: e2e-tests-secrets-7zngw, resource: bindings, ignored listing per whitelist
Jan 31 12:51:56.289: INFO: namespace e2e-tests-secrets-7zngw deletion completed in 6.297050209s

• [SLOW TEST:17.018 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:51:56.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:51:56.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-x75mq" to be "success or failure"
Jan 31 12:51:56.627: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.010386ms
Jan 31 12:51:58.808: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202580309s
Jan 31 12:52:00.840: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233865426s
Jan 31 12:52:02.910: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303971263s
Jan 31 12:52:04.924: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318676887s
Jan 31 12:52:07.144: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.538437762s
STEP: Saw pod success
Jan 31 12:52:07.145: INFO: Pod "downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:52:07.154: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:52:08.032: INFO: Waiting for pod downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005 to disappear
Jan 31 12:52:08.041: INFO: Pod downwardapi-volume-767905c1-4428-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:52:08.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x75mq" for this suite.
Jan 31 12:52:14.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:52:14.202: INFO: namespace: e2e-tests-downward-api-x75mq, resource: bindings, ignored listing per whitelist
Jan 31 12:52:14.249: INFO: namespace e2e-tests-downward-api-x75mq deletion completed in 6.196226025s

• [SLOW TEST:17.959 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:52:14.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 31 12:52:14.441: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix954651637/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:52:14.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zcdfp" for this suite.
Jan 31 12:52:20.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:52:20.692: INFO: namespace: e2e-tests-kubectl-zcdfp, resource: bindings, ignored listing per whitelist
Jan 31 12:52:20.811: INFO: namespace e2e-tests-kubectl-zcdfp deletion completed in 6.205457135s

• [SLOW TEST:6.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
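The proxy spec above starts `kubectl proxy` on a Unix socket and then retrieves `/api/` through it. A minimal manual reproduction might look like the following sketch (assumes a working kubeconfig and a `curl` build with `--unix-socket` support; the socket path is illustrative):

```shell
# Serve the API over a Unix socket instead of a TCP port
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# Query the API server through the socket; the host part of the URL is ignored
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

# Clean up the background proxy and the socket file
kill %1 && rm -f /tmp/kubectl-proxy.sock
```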
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:52:20.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 12:52:20.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4mglp'
Jan 31 12:52:21.114: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 12:52:21.115: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 31 12:52:21.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-4mglp'
Jan 31 12:52:21.313: INFO: stderr: ""
Jan 31 12:52:21.314: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:52:21.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4mglp" for this suite.
Jan 31 12:52:45.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:52:45.510: INFO: namespace: e2e-tests-kubectl-4mglp, resource: bindings, ignored listing per whitelist
Jan 31 12:52:45.704: INFO: namespace e2e-tests-kubectl-4mglp deletion completed in 24.380080732s

• [SLOW TEST:24.892 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
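The run-job spec above triggers the deprecation warning for `kubectl run --generator=job/v1` and the log itself suggests `kubectl create` as the replacement. A sketch of both forms (the non-deprecated `kubectl create job` subcommand is only available on newer clients than the v1.13 one used in this run):

```shell
# Deprecated form exercised by the test (--restart=OnFailure selects a Job)
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine

# Equivalent using the non-deprecated command the warning points to
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine

# Cleanup, mirroring the AfterEach step in the log
kubectl delete jobs e2e-test-nginx-job
```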
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:52:45.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-9412c399-4428-11ea-aae6-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-9412c4b5-4428-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9412c399-4428-11ea-aae6-0242ac110005
STEP: Updating configmap cm-test-opt-upd-9412c4b5-4428-11ea-aae6-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-9412c506-4428-11ea-aae6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:53:02.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-47gfq" for this suite.
Jan 31 12:53:28.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:53:29.053: INFO: namespace: e2e-tests-configmap-47gfq, resource: bindings, ignored listing per whitelist
Jan 31 12:53:29.101: INFO: namespace e2e-tests-configmap-47gfq deletion completed in 26.223568935s

• [SLOW TEST:43.397 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
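The optional-updates spec above deletes, updates, and creates ConfigMaps while a pod has them mounted, and relies on the volumes being marked optional so the pod runs even when a referenced ConfigMap is absent. The relevant knob, as a hedged volume fragment (names are illustrative, not the generated ones from the log):

```shell
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-del
      optional: true   # pod starts even if this ConfigMap does not (yet) exist
```

With `optional: true`, the kubelet keeps the volume contents in sync as the ConfigMap appears, changes, or disappears, which is exactly the behavior the "waiting to observe update in volume" step checks.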
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:53:29.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-adbaf4a9-4428-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 12:53:29.247: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-4mlsk" to be "success or failure"
Jan 31 12:53:29.308: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.798023ms
Jan 31 12:53:31.323: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075198993s
Jan 31 12:53:33.336: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088787765s
Jan 31 12:53:35.708: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460675503s
Jan 31 12:53:38.173: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.925457357s
Jan 31 12:53:40.192: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.944060303s
STEP: Saw pod success
Jan 31 12:53:40.192: INFO: Pod "pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:53:40.202: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 12:53:41.722: INFO: Waiting for pod pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005 to disappear
Jan 31 12:53:41.750: INFO: Pod pod-projected-configmaps-adbbcb72-4428-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:53:41.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4mlsk" for this suite.
Jan 31 12:53:47.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:53:47.919: INFO: namespace: e2e-tests-projected-4mlsk, resource: bindings, ignored listing per whitelist
Jan 31 12:53:48.036: INFO: namespace e2e-tests-projected-4mlsk deletion completed in 6.269364163s

• [SLOW TEST:18.935 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
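The defaultMode spec above mounts a projected ConfigMap with an explicit file mode and has the container report the mode back. A pod of roughly that shape can be sketched as follows (resource names, image, and mode are illustrative, not the generated ones from the log):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-test
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Print the permission bits of the projected file
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400   # applied to every projected file unless a per-item mode overrides it
EOF
```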
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:53:48.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-mkh2
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 12:53:48.389: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mkh2" in namespace "e2e-tests-subpath-qz8qv" to be "success or failure"
Jan 31 12:53:48.411: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.143968ms
Jan 31 12:53:50.422: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032569299s
Jan 31 12:53:52.477: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088235088s
Jan 31 12:53:54.713: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323891846s
Jan 31 12:53:56.737: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347730746s
Jan 31 12:53:58.753: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.364368737s
Jan 31 12:54:00.763: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.374303125s
Jan 31 12:54:02.786: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.397302643s
Jan 31 12:54:04.814: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 16.425507443s
Jan 31 12:54:06.838: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 18.448685992s
Jan 31 12:54:08.864: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 20.475211725s
Jan 31 12:54:10.886: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 22.497187923s
Jan 31 12:54:12.937: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 24.547968548s
Jan 31 12:54:14.956: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 26.567455428s
Jan 31 12:54:16.984: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 28.595226787s
Jan 31 12:54:19.003: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 30.614213219s
Jan 31 12:54:21.032: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 32.64338593s
Jan 31 12:54:23.749: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Running", Reason="", readiness=false. Elapsed: 35.360133204s
Jan 31 12:54:25.763: INFO: Pod "pod-subpath-test-configmap-mkh2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.373702684s
STEP: Saw pod success
Jan 31 12:54:25.763: INFO: Pod "pod-subpath-test-configmap-mkh2" satisfied condition "success or failure"
Jan 31 12:54:25.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-mkh2 container test-container-subpath-configmap-mkh2: 
STEP: delete the pod
Jan 31 12:54:26.600: INFO: Waiting for pod pod-subpath-test-configmap-mkh2 to disappear
Jan 31 12:54:26.834: INFO: Pod pod-subpath-test-configmap-mkh2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mkh2
Jan 31 12:54:26.834: INFO: Deleting pod "pod-subpath-test-configmap-mkh2" in namespace "e2e-tests-subpath-qz8qv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:54:26.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-qz8qv" for this suite.
Jan 31 12:54:35.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:54:35.268: INFO: namespace: e2e-tests-subpath-qz8qv, resource: bindings, ignored listing per whitelist
Jan 31 12:54:35.270: INFO: namespace e2e-tests-subpath-qz8qv deletion completed in 8.349838218s

• [SLOW TEST:47.234 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
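The subpath spec above uses `subPath` to mount a single ConfigMap key over a file that already exists in the container image. A hand-written equivalent might look like this (names and target file are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap
data:
  hostname: from-configmap
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/etc/hostname"]
    volumeMounts:
    # Mount one key of the ConfigMap over an existing file,
    # without shadowing the rest of /etc
    - name: config
      mountPath: /etc/hostname
      subPath: hostname
  volumes:
  - name: config
    configMap:
      name: subpath-configmap
EOF
```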
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:54:35.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:54:35.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-ns4ts" to be "success or failure"
Jan 31 12:54:35.736: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 130.835015ms
Jan 31 12:54:37.809: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204290368s
Jan 31 12:54:40.364: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.759165762s
Jan 31 12:54:42.394: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788905603s
Jan 31 12:54:47.522: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.916766923s
Jan 31 12:54:50.165: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.560551742s
Jan 31 12:54:52.217: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.611944569s
Jan 31 12:54:54.228: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.623027175s
Jan 31 12:54:57.416: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.811315401s
Jan 31 12:54:59.527: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.92192375s
STEP: Saw pod success
Jan 31 12:54:59.527: INFO: Pod "downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:54:59.582: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:54:59.873: INFO: Waiting for pod downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005 to disappear
Jan 31 12:55:00.001: INFO: Pod downwardapi-volume-d546cd18-4428-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:55:00.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ns4ts" for this suite.
Jan 31 12:55:06.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:55:06.530: INFO: namespace: e2e-tests-projected-ns4ts, resource: bindings, ignored listing per whitelist
Jan 31 12:55:06.533: INFO: namespace e2e-tests-projected-ns4ts deletion completed in 6.515403533s

• [SLOW TEST:31.263 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
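The projected downward API spec above checks that when a container sets no memory limit, a `resourceFieldRef` on `limits.memory` falls back to the node's allocatable memory. A sketch of the volume being exercised (names are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so the projected value
    # defaults to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
```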
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:55:06.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 12:55:06.875: INFO: Waiting up to 5m0s for pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-f8zb7" to be "success or failure"
Jan 31 12:55:07.032: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 157.419033ms
Jan 31 12:55:09.189: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313570831s
Jan 31 12:55:11.209: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333660568s
Jan 31 12:55:13.659: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.783517917s
Jan 31 12:55:15.701: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.826181383s
Jan 31 12:55:17.712: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.83658617s
Jan 31 12:55:19.762: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.886692214s
Jan 31 12:55:21.782: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.906712362s
STEP: Saw pod success
Jan 31 12:55:21.782: INFO: Pod "pod-e7eb3d19-4428-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:55:21.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e7eb3d19-4428-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 12:55:23.509: INFO: Waiting for pod pod-e7eb3d19-4428-11ea-aae6-0242ac110005 to disappear
Jan 31 12:55:24.151: INFO: Pod pod-e7eb3d19-4428-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:55:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f8zb7" for this suite.
Jan 31 12:55:30.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:55:30.756: INFO: namespace: e2e-tests-emptydir-f8zb7, resource: bindings, ignored listing per whitelist
Jan 31 12:55:30.916: INFO: namespace e2e-tests-emptydir-f8zb7 deletion completed in 6.751630244s

• [SLOW TEST:24.382 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
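The emptyDir spec above writes a 0644 file as root onto a default-medium emptyDir and verifies the mode. A rough equivalent (names and commands are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Write a file, force mode 0644, then read the mode back
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && stat -c '%a' /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium (node storage); medium: Memory would use tmpfs instead
EOF
```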
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:55:30.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 31 12:55:31.111: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 31 12:55:36.682: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 12:55:40.707: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 31 12:55:40.868: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-v5csz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v5csz/deployments/test-cleanup-deployment,UID:fc1a38bf-4428-11ea-a994-fa163e34d433,ResourceVersion:20086765,Generation:1,CreationTimestamp:2020-01-31 12:55:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 31 12:55:40.960: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan 31 12:55:40.960: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 31 12:55:40.961: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-v5csz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v5csz/replicasets/test-cleanup-controller,UID:f65e0e6d-4428-11ea-a994-fa163e34d433,ResourceVersion:20086767,Generation:1,CreationTimestamp:2020-01-31 12:55:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fc1a38bf-4428-11ea-a994-fa163e34d433 0xc002048e4f 0xc002048e60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 31 12:55:41.005: INFO: Pod "test-cleanup-controller-xhpv8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-xhpv8,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-v5csz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v5csz/pods/test-cleanup-controller-xhpv8,UID:f660be42-4428-11ea-a994-fa163e34d433,ResourceVersion:20086763,Generation:0,CreationTimestamp:2020-01-31 12:55:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f65e0e6d-4428-11ea-a994-fa163e34d433 0xc0020494a7 0xc0020494a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mdszk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mdszk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-mdszk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002049510} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002049530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:55:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:55:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:55:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:55:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-31 12:55:31 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 12:55:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://db1256c67716fb56b9fd448d4ced1c4b44a61d2489b28308578742ab8a5e6a72}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:55:41.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-v5csz" for this suite.
Jan 31 12:55:57.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:55:58.054: INFO: namespace: e2e-tests-deployment-v5csz, resource: bindings, ignored listing per whitelist
Jan 31 12:55:58.153: INFO: namespace e2e-tests-deployment-v5csz deletion completed in 16.892390388s

• [SLOW TEST:27.237 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:55:58.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:55:58.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005" in namespace "e2e-tests-downward-api-rgrqn" to be "success or failure"
Jan 31 12:55:58.561: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.554438ms
Jan 31 12:56:00.656: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142505535s
Jan 31 12:56:02.688: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173641307s
Jan 31 12:56:05.947: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.432976092s
Jan 31 12:56:07.962: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.447987801s
Jan 31 12:56:10.011: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.496718727s
STEP: Saw pod success
Jan 31 12:56:10.011: INFO: Pod "downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:56:10.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:56:11.185: INFO: Waiting for pod downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005 to disappear
Jan 31 12:56:11.209: INFO: Pod downwardapi-volume-069abf70-4429-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:56:11.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rgrqn" for this suite.
Jan 31 12:56:19.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:56:19.563: INFO: namespace: e2e-tests-downward-api-rgrqn, resource: bindings, ignored listing per whitelist
Jan 31 12:56:19.571: INFO: namespace e2e-tests-downward-api-rgrqn deletion completed in 8.321931903s

• [SLOW TEST:21.418 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:56:19.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:57:31.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-gplj4" for this suite.
Jan 31 12:57:39.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:57:39.632: INFO: namespace: e2e-tests-container-runtime-gplj4, resource: bindings, ignored listing per whitelist
Jan 31 12:57:39.834: INFO: namespace e2e-tests-container-runtime-gplj4 deletion completed in 8.295131667s

• [SLOW TEST:80.263 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:57:39.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:57:40.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-h2ct2" to be "success or failure"
Jan 31 12:57:40.583: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 229.289441ms
Jan 31 12:57:42.865: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511054729s
Jan 31 12:57:44.892: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53787198s
Jan 31 12:57:47.504: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.14991991s
Jan 31 12:57:49.538: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183810502s
Jan 31 12:57:52.097: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.743746851s
STEP: Saw pod success
Jan 31 12:57:52.098: INFO: Pod "downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 12:57:52.110: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 12:57:52.682: INFO: Waiting for pod downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005 to disappear
Jan 31 12:57:52.688: INFO: Pod downwardapi-volume-4360e843-4429-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:57:52.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h2ct2" for this suite.
Jan 31 12:57:58.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:57:58.904: INFO: namespace: e2e-tests-projected-h2ct2, resource: bindings, ignored listing per whitelist
Jan 31 12:57:58.953: INFO: namespace e2e-tests-projected-h2ct2 deletion completed in 6.258913159s

• [SLOW TEST:19.119 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:57:58.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 31 12:58:09.967: INFO: Successfully updated pod "labelsupdate4e95df00-4429-11ea-aae6-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 12:58:12.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mdpjg" for this suite.
Jan 31 12:58:36.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:58:36.343: INFO: namespace: e2e-tests-projected-mdpjg, resource: bindings, ignored listing per whitelist
Jan 31 12:58:36.356: INFO: namespace e2e-tests-projected-mdpjg deletion completed in 24.254330341s

• [SLOW TEST:37.403 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 12:58:36.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-zq9nh
Jan 31 12:58:48.725: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-zq9nh
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 12:58:48.748: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:02:49.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zq9nh" for this suite.
Jan 31 13:02:57.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:02:57.693: INFO: namespace: e2e-tests-container-probe-zq9nh, resource: bindings, ignored listing per whitelist
Jan 31 13:02:57.783: INFO: namespace e2e-tests-container-probe-zq9nh deletion completed in 8.438915745s

• [SLOW TEST:261.426 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:02:57.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-00be27f1-442a-11ea-aae6-0242ac110005
STEP: Creating secret with name s-test-opt-upd-00be290d-442a-11ea-aae6-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-00be27f1-442a-11ea-aae6-0242ac110005
STEP: Updating secret s-test-opt-upd-00be290d-442a-11ea-aae6-0242ac110005
STEP: Creating secret with name s-test-opt-create-00be2930-442a-11ea-aae6-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:04:29.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6xnbx" for this suite.
Jan 31 13:05:01.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:01.508: INFO: namespace: e2e-tests-projected-6xnbx, resource: bindings, ignored listing per whitelist
Jan 31 13:05:01.658: INFO: namespace e2e-tests-projected-6xnbx deletion completed in 32.382843237s

• [SLOW TEST:123.875 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:05:01.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4aa1cd99-442a-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 13:05:01.990: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-jqvhx" to be "success or failure"
Jan 31 13:05:01.999: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.960023ms
Jan 31 13:05:04.009: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01907694s
Jan 31 13:05:06.029: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038215869s
Jan 31 13:05:09.374: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.383743378s
Jan 31 13:05:11.429: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.438863573s
Jan 31 13:05:13.457: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.465956271s
Jan 31 13:05:15.481: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.490506346s
STEP: Saw pod success
Jan 31 13:05:15.481: INFO: Pod "pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:05:15.491: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 13:05:16.088: INFO: Waiting for pod pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005 to disappear
Jan 31 13:05:16.120: INFO: Pod pod-projected-secrets-4aa32516-442a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:05:16.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jqvhx" for this suite.
Jan 31 13:05:25.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:25.409: INFO: namespace: e2e-tests-projected-jqvhx, resource: bindings, ignored listing per whitelist
Jan 31 13:05:25.417: INFO: namespace e2e-tests-projected-jqvhx deletion completed in 8.445240557s

• [SLOW TEST:23.758 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:05:25.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-58c9e377-442a-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 13:05:25.893: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-xgrq4" to be "success or failure"
Jan 31 13:05:25.920: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.265811ms
Jan 31 13:05:28.045: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151966013s
Jan 31 13:05:30.159: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265835353s
Jan 31 13:05:32.236: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342440334s
Jan 31 13:05:34.495: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601864034s
Jan 31 13:05:36.659: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.766015664s
Jan 31 13:05:38.687: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.7940279s
Jan 31 13:05:40.703: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.809743867s
STEP: Saw pod success
Jan 31 13:05:40.703: INFO: Pod "pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:05:40.709: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 13:05:42.879: INFO: Waiting for pod pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005 to disappear
Jan 31 13:05:43.179: INFO: Pod pod-projected-configmaps-58de0f64-442a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:05:43.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xgrq4" for this suite.
Jan 31 13:05:49.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:49.563: INFO: namespace: e2e-tests-projected-xgrq4, resource: bindings, ignored listing per whitelist
Jan 31 13:05:49.625: INFO: namespace e2e-tests-projected-xgrq4 deletion completed in 6.418203859s

• [SLOW TEST:24.208 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:05:49.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-gck52/configmap-test-672eb82a-442a-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 31 13:05:50.017: INFO: Waiting up to 5m0s for pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005" in namespace "e2e-tests-configmap-gck52" to be "success or failure"
Jan 31 13:05:50.046: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.348199ms
Jan 31 13:05:52.059: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041907789s
Jan 31 13:05:54.076: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05948252s
Jan 31 13:05:56.873: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855819833s
Jan 31 13:05:58.913: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.896362761s
Jan 31 13:06:00.941: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.924119437s
Jan 31 13:06:02.964: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.946860884s
STEP: Saw pod success
Jan 31 13:06:02.964: INFO: Pod "pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:06:02.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005 container env-test: 
STEP: delete the pod
Jan 31 13:06:04.558: INFO: Waiting for pod pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005 to disappear
Jan 31 13:06:04.855: INFO: Pod pod-configmaps-6742db1c-442a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:06:04.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gck52" for this suite.
Jan 31 13:06:11.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:06:11.420: INFO: namespace: e2e-tests-configmap-gck52, resource: bindings, ignored listing per whitelist
Jan 31 13:06:11.420: INFO: namespace e2e-tests-configmap-gck52 deletion completed in 6.514787724s

• [SLOW TEST:21.790 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:06:11.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 31 13:06:26.402: INFO: Successfully updated pod "pod-update-7420ae6b-442a-11ea-aae6-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 31 13:06:26.420: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:06:26.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mcdff" for this suite.
Jan 31 13:06:50.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:06:50.882: INFO: namespace: e2e-tests-pods-mcdff, resource: bindings, ignored listing per whitelist
Jan 31 13:06:50.990: INFO: namespace e2e-tests-pods-mcdff deletion completed in 24.559968382s

• [SLOW TEST:39.569 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
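The "Waiting up to 5m0s for pod ... " lines above come from the framework polling a condition until it holds or a timeout expires. A minimal local sketch of that poll-until-timeout pattern, using a scratch file under /tmp in place of a live pod (all paths and names here are illustrative, not part of the e2e framework):

```shell
# wait_for TIMEOUT_SECONDS CONDITION
# Polls CONDITION once per second, mirroring the framework's
# "Waiting up to Xs for ..." loop; returns 1 on timeout.
wait_for() {
  local timeout=$1 cond=$2 elapsed=0
  while ! eval "$cond"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${elapsed}s" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "condition met after ${elapsed}s"
}

# Simulate a pod becoming ready after ~2s, then wait on it.
rm -f /tmp/pod-ready
( sleep 2; touch /tmp/pod-ready ) &
wait_for 10 '[ -e /tmp/pod-ready ]'
```

The real framework does the same thing against the API server, re-fetching the pod and checking its phase on each iteration instead of testing a file.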
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:06:50.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2fq8w
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2fq8w
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2fq8w
Jan 31 13:06:51.375: INFO: Found 0 stateful pods, waiting for 1
Jan 31 13:07:01.392: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 31 13:07:11.391: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 31 13:07:11.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 13:07:12.151: INFO: stderr: "I0131 13:07:11.670514    3715 log.go:172] (0xc00071e370) (0xc00073c640) Create stream\nI0131 13:07:11.670825    3715 log.go:172] (0xc00071e370) (0xc00073c640) Stream added, broadcasting: 1\nI0131 13:07:11.700522    3715 log.go:172] (0xc00071e370) Reply frame received for 1\nI0131 13:07:11.700580    3715 log.go:172] (0xc00071e370) (0xc00073c6e0) Create stream\nI0131 13:07:11.700594    3715 log.go:172] (0xc00071e370) (0xc00073c6e0) Stream added, broadcasting: 3\nI0131 13:07:11.701798    3715 log.go:172] (0xc00071e370) Reply frame received for 3\nI0131 13:07:11.701836    3715 log.go:172] (0xc00071e370) (0xc000642dc0) Create stream\nI0131 13:07:11.701870    3715 log.go:172] (0xc00071e370) (0xc000642dc0) Stream added, broadcasting: 5\nI0131 13:07:11.702888    3715 log.go:172] (0xc00071e370) Reply frame received for 5\nI0131 13:07:11.978377    3715 log.go:172] (0xc00071e370) Data frame received for 3\nI0131 13:07:11.978476    3715 log.go:172] (0xc00073c6e0) (3) Data frame handling\nI0131 13:07:11.978516    3715 log.go:172] (0xc00073c6e0) (3) Data frame sent\nI0131 13:07:12.131916    3715 log.go:172] (0xc00071e370) Data frame received for 1\nI0131 13:07:12.132067    3715 log.go:172] (0xc00073c640) (1) Data frame handling\nI0131 13:07:12.132121    3715 log.go:172] (0xc00073c640) (1) Data frame sent\nI0131 13:07:12.132708    3715 log.go:172] (0xc00071e370) (0xc00073c6e0) Stream removed, broadcasting: 3\nI0131 13:07:12.132871    3715 log.go:172] (0xc00071e370) (0xc00073c640) Stream removed, broadcasting: 1\nI0131 13:07:12.133740    3715 log.go:172] (0xc00071e370) (0xc000642dc0) Stream removed, broadcasting: 5\nI0131 13:07:12.134234    3715 log.go:172] (0xc00071e370) Go away received\nI0131 13:07:12.134411    3715 log.go:172] (0xc00071e370) (0xc00073c640) Stream removed, broadcasting: 1\nI0131 13:07:12.134493    3715 log.go:172] (0xc00071e370) (0xc00073c6e0) Stream removed, broadcasting: 3\nI0131 13:07:12.134538    3715 log.go:172] 
(0xc00071e370) (0xc000642dc0) Stream removed, broadcasting: 5\n"
Jan 31 13:07:12.152: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 13:07:12.152: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 13:07:12.190: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 13:07:22.223: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 13:07:22.223: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 13:07:22.287: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999552s
Jan 31 13:07:23.308: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978599672s
Jan 31 13:07:24.339: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.957024847s
Jan 31 13:07:25.352: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.926602972s
Jan 31 13:07:26.377: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.913359571s
Jan 31 13:07:27.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.888878922s
Jan 31 13:07:28.494: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.799742005s
Jan 31 13:07:30.167: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.771965487s
Jan 31 13:07:31.195: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.098432708s
Jan 31 13:07:32.286: INFO: Verifying statefulset ss doesn't scale past 1 for another 70.40166ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2fq8w
Jan 31 13:07:33.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 13:07:34.112: INFO: stderr: "I0131 13:07:33.592261    3738 log.go:172] (0xc000138580) (0xc000892500) Create stream\nI0131 13:07:33.592702    3738 log.go:172] (0xc000138580) (0xc000892500) Stream added, broadcasting: 1\nI0131 13:07:33.602770    3738 log.go:172] (0xc000138580) Reply frame received for 1\nI0131 13:07:33.602851    3738 log.go:172] (0xc000138580) (0xc000488be0) Create stream\nI0131 13:07:33.602874    3738 log.go:172] (0xc000138580) (0xc000488be0) Stream added, broadcasting: 3\nI0131 13:07:33.615584    3738 log.go:172] (0xc000138580) Reply frame received for 3\nI0131 13:07:33.615669    3738 log.go:172] (0xc000138580) (0xc000522000) Create stream\nI0131 13:07:33.615685    3738 log.go:172] (0xc000138580) (0xc000522000) Stream added, broadcasting: 5\nI0131 13:07:33.629909    3738 log.go:172] (0xc000138580) Reply frame received for 5\nI0131 13:07:33.861151    3738 log.go:172] (0xc000138580) Data frame received for 3\nI0131 13:07:33.861365    3738 log.go:172] (0xc000488be0) (3) Data frame handling\nI0131 13:07:33.861427    3738 log.go:172] (0xc000488be0) (3) Data frame sent\nI0131 13:07:34.098259    3738 log.go:172] (0xc000138580) Data frame received for 1\nI0131 13:07:34.098839    3738 log.go:172] (0xc000138580) (0xc000522000) Stream removed, broadcasting: 5\nI0131 13:07:34.099001    3738 log.go:172] (0xc000892500) (1) Data frame handling\nI0131 13:07:34.099054    3738 log.go:172] (0xc000892500) (1) Data frame sent\nI0131 13:07:34.099129    3738 log.go:172] (0xc000138580) (0xc000488be0) Stream removed, broadcasting: 3\nI0131 13:07:34.099320    3738 log.go:172] (0xc000138580) (0xc000892500) Stream removed, broadcasting: 1\nI0131 13:07:34.099548    3738 log.go:172] (0xc000138580) Go away received\nI0131 13:07:34.100584    3738 log.go:172] (0xc000138580) (0xc000892500) Stream removed, broadcasting: 1\nI0131 13:07:34.100617    3738 log.go:172] (0xc000138580) (0xc000488be0) Stream removed, broadcasting: 3\nI0131 13:07:34.100635    3738 log.go:172] 
(0xc000138580) (0xc000522000) Stream removed, broadcasting: 5\n"
Jan 31 13:07:34.112: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 13:07:34.112: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 13:07:34.142: INFO: Found 1 stateful pods, waiting for 3
Jan 31 13:07:44.165: INFO: Found 2 stateful pods, waiting for 3
Jan 31 13:07:54.164: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:07:54.165: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:07:54.165: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 31 13:07:54.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 13:07:54.962: INFO: stderr: "I0131 13:07:54.427859    3759 log.go:172] (0xc000138630) (0xc00065f400) Create stream\nI0131 13:07:54.428138    3759 log.go:172] (0xc000138630) (0xc00065f400) Stream added, broadcasting: 1\nI0131 13:07:54.436122    3759 log.go:172] (0xc000138630) Reply frame received for 1\nI0131 13:07:54.436177    3759 log.go:172] (0xc000138630) (0xc000756000) Create stream\nI0131 13:07:54.436187    3759 log.go:172] (0xc000138630) (0xc000756000) Stream added, broadcasting: 3\nI0131 13:07:54.437546    3759 log.go:172] (0xc000138630) Reply frame received for 3\nI0131 13:07:54.437570    3759 log.go:172] (0xc000138630) (0xc00065f4a0) Create stream\nI0131 13:07:54.437579    3759 log.go:172] (0xc000138630) (0xc00065f4a0) Stream added, broadcasting: 5\nI0131 13:07:54.439654    3759 log.go:172] (0xc000138630) Reply frame received for 5\nI0131 13:07:54.760129    3759 log.go:172] (0xc000138630) Data frame received for 3\nI0131 13:07:54.760243    3759 log.go:172] (0xc000756000) (3) Data frame handling\nI0131 13:07:54.760282    3759 log.go:172] (0xc000756000) (3) Data frame sent\nI0131 13:07:54.945128    3759 log.go:172] (0xc000138630) Data frame received for 1\nI0131 13:07:54.945202    3759 log.go:172] (0xc00065f400) (1) Data frame handling\nI0131 13:07:54.945216    3759 log.go:172] (0xc00065f400) (1) Data frame sent\nI0131 13:07:54.945245    3759 log.go:172] (0xc000138630) (0xc00065f400) Stream removed, broadcasting: 1\nI0131 13:07:54.945763    3759 log.go:172] (0xc000138630) (0xc000756000) Stream removed, broadcasting: 3\nI0131 13:07:54.947337    3759 log.go:172] (0xc000138630) (0xc00065f4a0) Stream removed, broadcasting: 5\nI0131 13:07:54.947470    3759 log.go:172] (0xc000138630) (0xc00065f400) Stream removed, broadcasting: 1\nI0131 13:07:54.947500    3759 log.go:172] (0xc000138630) (0xc000756000) Stream removed, broadcasting: 3\nI0131 13:07:54.947514    3759 log.go:172] (0xc000138630) (0xc00065f4a0) Stream removed, broadcasting: 5\n"
Jan 31 13:07:54.963: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 13:07:54.963: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 13:07:54.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 13:07:55.722: INFO: stderr: "I0131 13:07:55.307544    3781 log.go:172] (0xc0007722c0) (0xc0006e45a0) Create stream\nI0131 13:07:55.307786    3781 log.go:172] (0xc0007722c0) (0xc0006e45a0) Stream added, broadcasting: 1\nI0131 13:07:55.312080    3781 log.go:172] (0xc0007722c0) Reply frame received for 1\nI0131 13:07:55.312139    3781 log.go:172] (0xc0007722c0) (0xc0006e4640) Create stream\nI0131 13:07:55.312154    3781 log.go:172] (0xc0007722c0) (0xc0006e4640) Stream added, broadcasting: 3\nI0131 13:07:55.313399    3781 log.go:172] (0xc0007722c0) Reply frame received for 3\nI0131 13:07:55.313427    3781 log.go:172] (0xc0007722c0) (0xc0005ea000) Create stream\nI0131 13:07:55.313438    3781 log.go:172] (0xc0007722c0) (0xc0005ea000) Stream added, broadcasting: 5\nI0131 13:07:55.314151    3781 log.go:172] (0xc0007722c0) Reply frame received for 5\nI0131 13:07:55.510764    3781 log.go:172] (0xc0007722c0) Data frame received for 3\nI0131 13:07:55.510844    3781 log.go:172] (0xc0006e4640) (3) Data frame handling\nI0131 13:07:55.510875    3781 log.go:172] (0xc0006e4640) (3) Data frame sent\nI0131 13:07:55.702430    3781 log.go:172] (0xc0007722c0) Data frame received for 1\nI0131 13:07:55.702540    3781 log.go:172] (0xc0006e45a0) (1) Data frame handling\nI0131 13:07:55.702591    3781 log.go:172] (0xc0006e45a0) (1) Data frame sent\nI0131 13:07:55.703174    3781 log.go:172] (0xc0007722c0) (0xc0006e4640) Stream removed, broadcasting: 3\nI0131 13:07:55.703374    3781 log.go:172] (0xc0007722c0) (0xc0006e45a0) Stream removed, broadcasting: 1\nI0131 13:07:55.706174    3781 log.go:172] (0xc0007722c0) (0xc0005ea000) Stream removed, broadcasting: 5\nI0131 13:07:55.706515    3781 log.go:172] (0xc0007722c0) Go away received\nI0131 13:07:55.706910    3781 log.go:172] (0xc0007722c0) (0xc0006e45a0) Stream removed, broadcasting: 1\nI0131 13:07:55.707044    3781 log.go:172] (0xc0007722c0) (0xc0006e4640) Stream removed, broadcasting: 3\nI0131 13:07:55.707153    3781 log.go:172] 
(0xc0007722c0) (0xc0005ea000) Stream removed, broadcasting: 5\n"
Jan 31 13:07:55.722: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 13:07:55.722: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 13:07:55.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 13:07:56.396: INFO: stderr: "I0131 13:07:55.996064    3803 log.go:172] (0xc000712370) (0xc000730640) Create stream\nI0131 13:07:55.996317    3803 log.go:172] (0xc000712370) (0xc000730640) Stream added, broadcasting: 1\nI0131 13:07:56.001654    3803 log.go:172] (0xc000712370) Reply frame received for 1\nI0131 13:07:56.001730    3803 log.go:172] (0xc000712370) (0xc00067ebe0) Create stream\nI0131 13:07:56.001742    3803 log.go:172] (0xc000712370) (0xc00067ebe0) Stream added, broadcasting: 3\nI0131 13:07:56.002846    3803 log.go:172] (0xc000712370) Reply frame received for 3\nI0131 13:07:56.002984    3803 log.go:172] (0xc000712370) (0xc0006ce000) Create stream\nI0131 13:07:56.003006    3803 log.go:172] (0xc000712370) (0xc0006ce000) Stream added, broadcasting: 5\nI0131 13:07:56.008249    3803 log.go:172] (0xc000712370) Reply frame received for 5\nI0131 13:07:56.154687    3803 log.go:172] (0xc000712370) Data frame received for 3\nI0131 13:07:56.154802    3803 log.go:172] (0xc00067ebe0) (3) Data frame handling\nI0131 13:07:56.154821    3803 log.go:172] (0xc00067ebe0) (3) Data frame sent\nI0131 13:07:56.382421    3803 log.go:172] (0xc000712370) (0xc00067ebe0) Stream removed, broadcasting: 3\nI0131 13:07:56.382804    3803 log.go:172] (0xc000712370) Data frame received for 1\nI0131 13:07:56.382837    3803 log.go:172] (0xc000730640) (1) Data frame handling\nI0131 13:07:56.382891    3803 log.go:172] (0xc000730640) (1) Data frame sent\nI0131 13:07:56.383039    3803 log.go:172] (0xc000712370) (0xc000730640) Stream removed, broadcasting: 1\nI0131 13:07:56.383277    3803 log.go:172] (0xc000712370) (0xc0006ce000) Stream removed, broadcasting: 5\nI0131 13:07:56.383436    3803 log.go:172] (0xc000712370) Go away received\nI0131 13:07:56.383742    3803 log.go:172] (0xc000712370) (0xc000730640) Stream removed, broadcasting: 1\nI0131 13:07:56.383780    3803 log.go:172] (0xc000712370) (0xc00067ebe0) Stream removed, broadcasting: 3\nI0131 13:07:56.383794    3803 log.go:172] 
(0xc000712370) (0xc0006ce000) Stream removed, broadcasting: 5\n"
Jan 31 13:07:56.396: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 13:07:56.396: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 13:07:56.396: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 13:07:56.418: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 31 13:08:06.450: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 13:08:06.451: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 13:08:06.451: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 13:08:06.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999502s
Jan 31 13:08:07.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973989806s
Jan 31 13:08:08.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.954604183s
Jan 31 13:08:09.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.896277462s
Jan 31 13:08:10.651: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.86964385s
Jan 31 13:08:11.668: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.818717384s
Jan 31 13:08:12.697: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.801110543s
Jan 31 13:08:13.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.772613655s
Jan 31 13:08:14.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.749289634s
Jan 31 13:08:15.789: INFO: Verifying statefulset ss doesn't scale past 3 for another 728.129956ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-2fq8w
Jan 31 13:08:16.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 13:08:17.459: INFO: stderr: "I0131 13:08:17.092829    3825 log.go:172] (0xc00014c6e0) (0xc000687540) Create stream\nI0131 13:08:17.093089    3825 log.go:172] (0xc00014c6e0) (0xc000687540) Stream added, broadcasting: 1\nI0131 13:08:17.099744    3825 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0131 13:08:17.099800    3825 log.go:172] (0xc00014c6e0) (0xc0001c0460) Create stream\nI0131 13:08:17.099818    3825 log.go:172] (0xc00014c6e0) (0xc0001c0460) Stream added, broadcasting: 3\nI0131 13:08:17.100757    3825 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0131 13:08:17.100829    3825 log.go:172] (0xc00014c6e0) (0xc00012a000) Create stream\nI0131 13:08:17.100842    3825 log.go:172] (0xc00014c6e0) (0xc00012a000) Stream added, broadcasting: 5\nI0131 13:08:17.102003    3825 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0131 13:08:17.223589    3825 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0131 13:08:17.223742    3825 log.go:172] (0xc0001c0460) (3) Data frame handling\nI0131 13:08:17.223797    3825 log.go:172] (0xc0001c0460) (3) Data frame sent\nI0131 13:08:17.443851    3825 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0131 13:08:17.443999    3825 log.go:172] (0xc00014c6e0) (0xc0001c0460) Stream removed, broadcasting: 3\nI0131 13:08:17.444082    3825 log.go:172] (0xc000687540) (1) Data frame handling\nI0131 13:08:17.444128    3825 log.go:172] (0xc000687540) (1) Data frame sent\nI0131 13:08:17.444184    3825 log.go:172] (0xc00014c6e0) (0xc00012a000) Stream removed, broadcasting: 5\nI0131 13:08:17.444233    3825 log.go:172] (0xc00014c6e0) (0xc000687540) Stream removed, broadcasting: 1\nI0131 13:08:17.444278    3825 log.go:172] (0xc00014c6e0) Go away received\nI0131 13:08:17.445074    3825 log.go:172] (0xc00014c6e0) (0xc000687540) Stream removed, broadcasting: 1\nI0131 13:08:17.445127    3825 log.go:172] (0xc00014c6e0) (0xc0001c0460) Stream removed, broadcasting: 3\nI0131 13:08:17.445157    3825 log.go:172] 
(0xc00014c6e0) (0xc00012a000) Stream removed, broadcasting: 5\n"
Jan 31 13:08:17.460: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 13:08:17.460: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 13:08:17.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 13:08:18.068: INFO: stderr: "I0131 13:08:17.765971    3847 log.go:172] (0xc00066a0b0) (0xc0006ae5a0) Create stream\nI0131 13:08:17.766098    3847 log.go:172] (0xc00066a0b0) (0xc0006ae5a0) Stream added, broadcasting: 1\nI0131 13:08:17.774102    3847 log.go:172] (0xc00066a0b0) Reply frame received for 1\nI0131 13:08:17.774143    3847 log.go:172] (0xc00066a0b0) (0xc00061ae60) Create stream\nI0131 13:08:17.774154    3847 log.go:172] (0xc00066a0b0) (0xc00061ae60) Stream added, broadcasting: 3\nI0131 13:08:17.776131    3847 log.go:172] (0xc00066a0b0) Reply frame received for 3\nI0131 13:08:17.776229    3847 log.go:172] (0xc00066a0b0) (0xc0002e0000) Create stream\nI0131 13:08:17.776236    3847 log.go:172] (0xc00066a0b0) (0xc0002e0000) Stream added, broadcasting: 5\nI0131 13:08:17.777884    3847 log.go:172] (0xc00066a0b0) Reply frame received for 5\nI0131 13:08:17.908102    3847 log.go:172] (0xc00066a0b0) Data frame received for 3\nI0131 13:08:17.908237    3847 log.go:172] (0xc00061ae60) (3) Data frame handling\nI0131 13:08:17.908258    3847 log.go:172] (0xc00061ae60) (3) Data frame sent\nI0131 13:08:18.058832    3847 log.go:172] (0xc00066a0b0) (0xc0002e0000) Stream removed, broadcasting: 5\nI0131 13:08:18.059059    3847 log.go:172] (0xc00066a0b0) Data frame received for 1\nI0131 13:08:18.059116    3847 log.go:172] (0xc00066a0b0) (0xc00061ae60) Stream removed, broadcasting: 3\nI0131 13:08:18.059190    3847 log.go:172] (0xc0006ae5a0) (1) Data frame handling\nI0131 13:08:18.059245    3847 log.go:172] (0xc0006ae5a0) (1) Data frame sent\nI0131 13:08:18.059261    3847 log.go:172] (0xc00066a0b0) (0xc0006ae5a0) Stream removed, broadcasting: 1\nI0131 13:08:18.059278    3847 log.go:172] (0xc00066a0b0) Go away received\nI0131 13:08:18.059709    3847 log.go:172] (0xc00066a0b0) (0xc0006ae5a0) Stream removed, broadcasting: 1\nI0131 13:08:18.059728    3847 log.go:172] (0xc00066a0b0) (0xc00061ae60) Stream removed, broadcasting: 3\nI0131 13:08:18.059753    3847 log.go:172] 
(0xc00066a0b0) (0xc0002e0000) Stream removed, broadcasting: 5\n"
Jan 31 13:08:18.068: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 13:08:18.068: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 13:08:18.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2fq8w ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 13:08:19.004: INFO: stderr: "I0131 13:08:18.287299    3868 log.go:172] (0xc00070e370) (0xc000738640) Create stream\nI0131 13:08:18.287478    3868 log.go:172] (0xc00070e370) (0xc000738640) Stream added, broadcasting: 1\nI0131 13:08:18.294512    3868 log.go:172] (0xc00070e370) Reply frame received for 1\nI0131 13:08:18.294586    3868 log.go:172] (0xc00070e370) (0xc000666d20) Create stream\nI0131 13:08:18.294595    3868 log.go:172] (0xc00070e370) (0xc000666d20) Stream added, broadcasting: 3\nI0131 13:08:18.295446    3868 log.go:172] (0xc00070e370) Reply frame received for 3\nI0131 13:08:18.295469    3868 log.go:172] (0xc00070e370) (0xc0005fa000) Create stream\nI0131 13:08:18.295479    3868 log.go:172] (0xc00070e370) (0xc0005fa000) Stream added, broadcasting: 5\nI0131 13:08:18.296697    3868 log.go:172] (0xc00070e370) Reply frame received for 5\nI0131 13:08:18.420608    3868 log.go:172] (0xc00070e370) Data frame received for 3\nI0131 13:08:18.420855    3868 log.go:172] (0xc000666d20) (3) Data frame handling\nI0131 13:08:18.420910    3868 log.go:172] (0xc000666d20) (3) Data frame sent\nI0131 13:08:18.993955    3868 log.go:172] (0xc00070e370) Data frame received for 1\nI0131 13:08:18.994124    3868 log.go:172] (0xc00070e370) (0xc000666d20) Stream removed, broadcasting: 3\nI0131 13:08:18.994216    3868 log.go:172] (0xc000738640) (1) Data frame handling\nI0131 13:08:18.994237    3868 log.go:172] (0xc000738640) (1) Data frame sent\nI0131 13:08:18.994361    3868 log.go:172] (0xc00070e370) (0xc000738640) Stream removed, broadcasting: 1\nI0131 13:08:18.994434    3868 log.go:172] (0xc00070e370) (0xc0005fa000) Stream removed, broadcasting: 5\nI0131 13:08:18.994464    3868 log.go:172] (0xc00070e370) Go away received\nI0131 13:08:18.994813    3868 log.go:172] (0xc00070e370) (0xc000738640) Stream removed, broadcasting: 1\nI0131 13:08:18.994824    3868 log.go:172] (0xc00070e370) (0xc000666d20) Stream removed, broadcasting: 3\nI0131 13:08:18.994828    3868 log.go:172] 
(0xc00070e370) (0xc0005fa000) Stream removed, broadcasting: 5\n"
Jan 31 13:08:19.004: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 13:08:19.004: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 13:08:19.004: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 31 13:08:39.060: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2fq8w
Jan 31 13:08:39.071: INFO: Scaling statefulset ss to 0
Jan 31 13:08:39.097: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 13:08:39.102: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:08:39.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2fq8w" for this suite.
Jan 31 13:08:47.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:08:47.317: INFO: namespace: e2e-tests-statefulset-2fq8w, resource: bindings, ignored listing per whitelist
Jan 31 13:08:47.362: INFO: namespace e2e-tests-statefulset-2fq8w deletion completed in 8.215669414s

• [SLOW TEST:116.371 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
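The StatefulSet test above makes pods unready by running `mv -v /usr/share/nginx/html/index.html /tmp/ || true` inside each pod via `kubectl exec`: with index.html gone, nginx stops serving `/` and the HTTP readiness probe fails, which is what halts scaling. The `|| true` keeps the exec's exit status 0 even if the file was already moved. A local sketch of that idiom, on a scratch directory instead of a pod (paths are illustrative):

```shell
# Recreate a fake "web root" with an index.html in it.
rm -rf /tmp/ss-demo && mkdir -p /tmp/ss-demo/html
echo 'hello' > /tmp/ss-demo/html/index.html

# First run: the file exists, so mv succeeds and moves it away.
mv -v /tmp/ss-demo/html/index.html /tmp/ss-demo/ || true

# Second run: the source is gone and mv fails, but `|| true`
# forces the overall exit status back to 0.
mv -v /tmp/ss-demo/html/index.html /tmp/ss-demo/ 2>/dev/null || true
echo "exit status: $?"   # prints: exit status: 0
```

The reverse command in the log (`mv -v /tmp/index.html /usr/share/nginx/html/`) restores the file, the probe passes again, and scaling resumes.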
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:08:47.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 31 13:08:47.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-f2zj4" to be "success or failure"
Jan 31 13:08:47.724: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.223364ms
Jan 31 13:08:49.926: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21332378s
Jan 31 13:08:51.947: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23434989s
Jan 31 13:08:54.009: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296124409s
Jan 31 13:08:56.030: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317099761s
Jan 31 13:08:58.043: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.330632832s
STEP: Saw pod success
Jan 31 13:08:58.044: INFO: Pod "downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:08:58.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005 container client-container: 
STEP: delete the pod
Jan 31 13:08:58.160: INFO: Waiting for pod downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005 to disappear
Jan 31 13:08:58.375: INFO: Pod downwardapi-volume-d12dcac5-442a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:08:58.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f2zj4" for this suite.
Jan 31 13:09:04.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:09:04.761: INFO: namespace: e2e-tests-projected-f2zj4, resource: bindings, ignored listing per whitelist
Jan 31 13:09:04.764: INFO: namespace e2e-tests-projected-f2zj4 deletion completed in 6.360512277s

• [SLOW TEST:17.402 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
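The repeated `Phase="Pending" ... Elapsed: ...` polls above come from the framework's "success or failure" wait helper (`Waiting up to 5m0s for pod ...`). The real helper is Go code in the e2e framework; the sketch below is a hypothetical Python stand-in with a stubbed `phase_fn`, shown only to illustrate the poll-until-terminal-phase pattern visible in the log.

```python
import time

def wait_for_pod_success(phase_fn, timeout_s=300.0, poll_s=2.0, sleep=time.sleep):
    """Poll phase_fn until it returns a terminal pod phase or the timeout expires.

    phase_fn() -> one of "Pending", "Running", "Succeeded", "Failed".
    Returns the terminal phase, or raises TimeoutError (the log's 5m0s limit).
    """
    elapsed = 0.0
    while elapsed <= timeout_s:
        phase = phase_fn()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll_s)
        elapsed += poll_s
    raise TimeoutError("pod did not reach a terminal phase in time")

if __name__ == "__main__":
    # Simulate the log's progression: several Pending polls, then Succeeded.
    phases = iter(["Pending"] * 4 + ["Succeeded"])
    print(wait_for_pod_success(lambda: next(phases), sleep=lambda _: None))  # Succeeded
```

Note the log also shows an intermediate `Phase="Running"` poll in some specs; any non-terminal phase simply causes another sleep-and-retry cycle.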
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:09:04.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 13:09:05.072: INFO: Waiting up to 5m0s for pod "pod-db7e2975-442a-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-l6hgw" to be "success or failure"
Jan 31 13:09:05.105: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.616845ms
Jan 31 13:09:07.141: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068601169s
Jan 31 13:09:09.202: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129638972s
Jan 31 13:09:11.566: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493307729s
Jan 31 13:09:13.578: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505155559s
Jan 31 13:09:15.606: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.532998193s
Jan 31 13:09:18.094: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.021106268s
STEP: Saw pod success
Jan 31 13:09:18.094: INFO: Pod "pod-db7e2975-442a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:09:18.112: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db7e2975-442a-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 13:09:18.396: INFO: Waiting for pod pod-db7e2975-442a-11ea-aae6-0242ac110005 to disappear
Jan 31 13:09:18.407: INFO: Pod pod-db7e2975-442a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:09:18.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l6hgw" for this suite.
Jan 31 13:09:24.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:09:24.722: INFO: namespace: e2e-tests-emptydir-l6hgw, resource: bindings, ignored listing per whitelist
Jan 31 13:09:24.761: INFO: namespace e2e-tests-emptydir-l6hgw deletion completed in 6.346140535s

• [SLOW TEST:19.996 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
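The `(non-root,0777,tmpfs)` spec above mounts a tmpfs-backed emptyDir with mode 0777 and then has the test container verify the file mode. The assertion itself can be reproduced locally without a cluster; this is a rough Python sketch of the permission check, not the test's actual Go code, and the directory name is illustrative.

```python
import os
import stat
import tempfile

def mode_bits(path):
    """Return only the permission bits of path as an octal int (e.g. 0o777)."""
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "mount-dir")  # stands in for the emptyDir mount
    os.mkdir(target)
    os.chmod(target, 0o777)                # what the volume's 0777 mode requests
    assert mode_bits(target) == 0o777      # what the test container checks
    print(oct(mode_bits(target)))          # 0o777
```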
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:09:24.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 31 13:09:37.603: INFO: Successfully updated pod "labelsupdatee756a550-442a-11ea-aae6-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:09:39.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q76cb" for this suite.
Jan 31 13:10:03.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:10:04.116: INFO: namespace: e2e-tests-downward-api-q76cb, resource: bindings, ignored listing per whitelist
Jan 31 13:10:04.130: INFO: namespace e2e-tests-downward-api-q76cb deletion completed in 24.415571816s

• [SLOW TEST:39.369 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
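The `should update labels on modification` spec projects the pod's `metadata.labels` into a Downward API volume file, updates the labels (`Successfully updated pod ...` above), then polls the file until the kubelet refreshes its contents. A minimal local stand-in for that poll, using a plain file instead of a kubelet-managed projection (file name and label values are illustrative):

```python
import os
import tempfile

def poll_for_content(path, want, attempts=10):
    """Re-read path up to `attempts` times, returning True once it matches."""
    for _ in range(attempts):
        with open(path) as f:
            if f.read() == want:
                return True
    return False

with tempfile.TemporaryDirectory() as d:
    labels_file = os.path.join(d, "labels")
    with open(labels_file, "w") as f:
        f.write('key1="value1"\n')
    # Simulate the kubelet rewriting the projection after the label update.
    with open(labels_file, "w") as f:
        f.write('key1="value1"\nkey2="value2"\n')
    assert poll_for_content(labels_file, 'key1="value1"\nkey2="value2"\n')
    print("updated")
```

In the real test the rewrite is asynchronous, which is why the spec polls rather than reading once.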
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:10:04.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 31 13:10:04.382: INFO: Waiting up to 5m0s for pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005" in namespace "e2e-tests-emptydir-69nlq" to be "success or failure"
Jan 31 13:10:04.399: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.154177ms
Jan 31 13:10:06.578: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195455153s
Jan 31 13:10:08.676: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293701529s
Jan 31 13:10:10.689: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307006103s
Jan 31 13:10:12.730: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34769096s
Jan 31 13:10:14.763: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.381207699s
Jan 31 13:10:17.302: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.920047909s
STEP: Saw pod success
Jan 31 13:10:17.302: INFO: Pod "pod-fee0e9ae-442a-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:10:17.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fee0e9ae-442a-11ea-aae6-0242ac110005 container test-container: 
STEP: delete the pod
Jan 31 13:10:18.022: INFO: Waiting for pod pod-fee0e9ae-442a-11ea-aae6-0242ac110005 to disappear
Jan 31 13:10:18.058: INFO: Pod pod-fee0e9ae-442a-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:10:18.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-69nlq" for this suite.
Jan 31 13:10:24.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:10:24.325: INFO: namespace: e2e-tests-emptydir-69nlq, resource: bindings, ignored listing per whitelist
Jan 31 13:10:24.372: INFO: namespace e2e-tests-emptydir-69nlq deletion completed in 6.305519888s

• [SLOW TEST:20.242 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 31 13:10:24.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0b081753-442b-11ea-aae6-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 31 13:10:24.853: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005" in namespace "e2e-tests-projected-4rffb" to be "success or failure"
Jan 31 13:10:24.865: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.893513ms
Jan 31 13:10:26.941: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087979861s
Jan 31 13:10:28.972: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11877846s
Jan 31 13:10:31.060: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206761752s
Jan 31 13:10:33.188: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335199666s
Jan 31 13:10:35.221: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.368421002s
Jan 31 13:10:38.139: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.286140506s
STEP: Saw pod success
Jan 31 13:10:38.139: INFO: Pod "pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005" satisfied condition "success or failure"
Jan 31 13:10:38.595: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 13:10:38.670: INFO: Waiting for pod pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005 to disappear
Jan 31 13:10:38.678: INFO: Pod pod-projected-secrets-0b137476-442b-11ea-aae6-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 31 13:10:38.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4rffb" for this suite.
Jan 31 13:10:44.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:10:44.995: INFO: namespace: e2e-tests-projected-4rffb, resource: bindings, ignored listing per whitelist
Jan 31 13:10:45.121: INFO: namespace e2e-tests-projected-4rffb deletion completed in 6.279112795s

• [SLOW TEST:20.747 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
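The projected-secret spec above sets `defaultMode` and `fsGroup` and checks the resulting file mode and group inside the pod. The mode-selection rule (a per-item `mode` overrides the volume's `defaultMode`) is simple enough to sketch; the values below are illustrative, not the ones this spec actually uses.

```python
def projected_file_mode(default_mode, item_mode=None):
    """Mode applied to a projected file: per-item mode wins over defaultMode."""
    return item_mode if item_mode is not None else default_mode

# No per-item override: the volume-wide defaultMode applies.
assert projected_file_mode(0o440) == 0o440
# A per-item mode overrides defaultMode for that file only.
assert projected_file_mode(0o440, item_mode=0o400) == 0o400
print(oct(projected_file_mode(0o440)))  # 0o440
```

With `fsGroup` set, the kubelet additionally chowns the volume's files to that group, which is why a non-root container can read a group-readable mode like 0o440.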
SSSSSSSSSSSSSSSSSSSS
Jan 31 13:10:45.121: INFO: Running AfterSuite actions on all nodes
Jan 31 13:10:45.121: INFO: Running AfterSuite actions on node 1
Jan 31 13:10:45.121: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8606.926 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8607.65s)
FAIL