I0701 10:46:47.165742 8 e2e.go:224] Starting e2e run "85df4400-9bed-11e9-9f49-0242ac110006" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1561978006 - Will randomize all specs
Will run 201 of 2162 specs
Jul 1 10:46:47.309: INFO: >>> kubeConfig: /root/.kube/config
Jul 1 10:46:47.312: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 1 10:46:47.322: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 1 10:46:47.345: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 1 10:46:47.345: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 1 10:46:47.345: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 1 10:46:47.381: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 1 10:46:47.381: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jul 1 10:46:47.381: INFO: e2e test version: v1.13.7
Jul 1 10:46:47.383: INFO: kube-apiserver version: v1.13.7
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 1 10:46:47.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jul 1 10:46:47.448: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 1 10:46:47.450: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix134784217/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 1 10:46:47.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xrmd7" for this suite.
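A by-hand equivalent of what this spec exercises: start kubectl proxy on a Unix socket, then fetch /api/ through it with curl. The socket path below is illustrative, not the temp path from the log.

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill "$PROXY_PID"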
Jul 1 10:46:53.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:46:53.610: INFO: namespace: e2e-tests-kubectl-xrmd7, resource: bindings, ignored listing per whitelist Jul 1 10:46:53.681: INFO: namespace e2e-tests-kubectl-xrmd7 deletion completed in 6.171796222s • [SLOW TEST:6.298 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:46:53.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 1 10:46:53.904: INFO: Number of nodes with available pods: 0 Jul 1 10:46:53.904: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:46:54.913: INFO: Number of nodes with available pods: 0 Jul 1 10:46:54.914: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:46:55.930: INFO: Number of nodes with available pods: 0 Jul 1 10:46:55.930: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:46:56.914: INFO: Number of nodes with available pods: 1 Jul 1 10:46:56.914: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jul 1 10:46:56.937: INFO: Number of nodes with available pods: 0 Jul 1 10:46:56.937: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:46:57.944: INFO: Number of nodes with available pods: 0 Jul 1 10:46:57.944: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:46:58.944: INFO: Number of nodes with available pods: 0 Jul 1 10:46:58.944: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:46:59.947: INFO: Number of nodes with available pods: 0 Jul 1 10:46:59.947: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:47:00.948: INFO: Number of nodes with available pods: 0 Jul 1 10:47:00.948: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:47:02.008: INFO: Number of nodes with available pods: 0 Jul 1 10:47:02.008: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:47:02.948: INFO: Number of nodes with available pods: 0 Jul 1 10:47:02.948: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:47:03.943: INFO: Number of nodes with available pods: 1 Jul 1 10:47:03.943: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-t6nl6, will wait for the garbage collector to delete the pods Jul 1 10:47:04.059: INFO: Deleting DaemonSet.extensions daemon-set took: 60.672408ms Jul 1 10:47:04.159: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237648ms Jul 1 10:47:15.964: INFO: Number of nodes with available pods: 0 Jul 1 10:47:15.964: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 10:47:16.024: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t6nl6/daemonsets","resourceVersion":"1839579"},"items":null} Jul 1 10:47:16.031: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t6nl6/pods","resourceVersion":"1839580"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:47:16.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-t6nl6" for this suite. 
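A rough stand-alone sketch of the DaemonSet flow this spec checks: create a DaemonSet, wait for a daemon pod on every node, delete one pod, and watch the controller revive it. The image and labels below are illustrative; the e2e test uses its own test image and selector.

kubectl create -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
# Stop a daemon pod, then watch the DaemonSet controller recreate it:
kubectl delete pod -l app=daemon-set
kubectl get pods -l app=daemon-set -w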
Jul 1 10:47:22.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:47:22.123: INFO: namespace: e2e-tests-daemonsets-t6nl6, resource: bindings, ignored listing per whitelist Jul 1 10:47:22.133: INFO: namespace e2e-tests-daemonsets-t6nl6 deletion completed in 6.088311119s • [SLOW TEST:28.452 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:47:22.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0701 10:47:33.107730 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 10:47:33.107: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:47:33.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-q6v65" for this suite. 
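The garbage-collector spec builds two ReplicationControllers, gives half of the doomed RC's pods the surviving RC as an additional owner, deletes the doomed RC, and verifies the doubly-owned pods remain. Ownership can be inspected by hand; the jsonpath below is illustrative, and --cascade=foreground assumes a recent kubectl (older clients spell it --cascade=true).

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
# Foreground-delete one owner; dependents that also list simpletest-rc-to-stay as an owner should survive.
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground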
Jul 1 10:47:41.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:47:41.250: INFO: namespace: e2e-tests-gc-q6v65, resource: bindings, ignored listing per whitelist Jul 1 10:47:41.277: INFO: namespace e2e-tests-gc-q6v65 deletion completed in 8.164767097s • [SLOW TEST:19.143 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:47:41.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-a67c9e19-9bed-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 10:47:41.452: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-jkg58" to be "success or failure" Jul 1 10:47:41.456: INFO: Pod "pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031871ms Jul 1 10:47:43.463: INFO: Pod "pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010938137s Jul 1 10:47:45.469: INFO: Pod "pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016656901s STEP: Saw pod success Jul 1 10:47:45.469: INFO: Pod "pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:47:45.474: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006 container secret-volume-test: STEP: delete the pod Jul 1 10:47:45.569: INFO: Waiting for pod pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006 to disappear Jul 1 10:47:45.580: INFO: Pod pod-projected-secrets-a67d7e49-9bed-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:47:45.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jkg58" for this suite. 
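The projected-secret spec mounts a single Secret into one pod through two projected volumes and reads it back from both paths. A hand-rolled sketch with illustrative names, and busybox standing in for the e2e mounttest image:

kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-one/data-1 /etc/projected-two/data-1"]
    volumeMounts:
    - name: vol-one
      mountPath: /etc/projected-one
    - name: vol-two
      mountPath: /etc/projected-two
  volumes:
  - name: vol-one
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: vol-two
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF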
Jul 1 10:47:51.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:47:51.760: INFO: namespace: e2e-tests-projected-jkg58, resource: bindings, ignored listing per whitelist Jul 1 10:47:51.815: INFO: namespace e2e-tests-projected-jkg58 deletion completed in 6.225906058s • [SLOW TEST:10.538 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:47:51.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:47:55.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6jg5x" for this suite. 
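The Kubelet spec runs a busybox command that always fails and asserts the container status ends in a terminated state with a reason. Roughly, with an illustrative pod name:

kubectl run always-fails --image=busybox --restart=Never --command -- /bin/false
# Once the container has exited, this typically prints "Error":
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'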
Jul 1 10:48:02.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:48:02.055: INFO: namespace: e2e-tests-kubelet-test-6jg5x, resource: bindings, ignored listing per whitelist Jul 1 10:48:02.080: INFO: namespace e2e-tests-kubelet-test-6jg5x deletion completed in 6.088394189s • [SLOW TEST:10.265 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:48:02.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 10:48:02.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d6zq8' Jul 1 10:48:03.940: INFO: stderr: "" Jul 1 10:48:03.940: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jul 1 10:48:08.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d6zq8 -o json' Jul 1 10:48:09.093: INFO: stderr: "" Jul 1 10:48:09.093: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-07-01T10:48:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-d6zq8\",\n \"resourceVersion\": \"1839911\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-d6zq8/pods/e2e-test-nginx-pod\",\n \"uid\": \"b3e38b79-9bed-11e9-a678-fa163e0cec1d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lbc9s\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n 
\"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-x6tdbol33slm\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lbc9s\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lbc9s\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-07-01T10:48:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-07-01T10:48:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-07-01T10:48:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-07-01T10:48:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://4e713172063aec7c5a9782e93042eaa322886fea4e97711afe36b1e294158725\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-07-01T10:48:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"192.168.100.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.5\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-07-01T10:48:03Z\"\n }\n}\n" STEP: replace the image in the pod Jul 1 10:48:09.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-d6zq8' Jul 1 10:48:09.384: INFO: stderr: "" Jul 1 10:48:09.384: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jul 1 10:48:09.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-d6zq8' Jul 1 10:48:12.428: INFO: stderr: "" Jul 1 10:48:12.428: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:48:12.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d6zq8" for this suite. 
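Stripped of the --kubeconfig/--namespace plumbing, the replace flow above amounts to: run an nginx pod, rewrite its image field, push the object back with kubectl replace, and confirm the new image. The e2e framework edits the JSON in memory; sed stands in for that step here, and the 1.13-era --generator flag is dropped.

kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --restart=Never
kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'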
Jul 1 10:48:18.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:48:18.564: INFO: namespace: e2e-tests-kubectl-d6zq8, resource: bindings, ignored listing per whitelist Jul 1 10:48:18.647: INFO: namespace e2e-tests-kubectl-d6zq8 deletion completed in 6.145373186s • [SLOW TEST:16.566 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:48:18.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-wjhw STEP: Creating a pod to test atomic-volume-subpath Jul 1 10:48:18.781: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wjhw" in namespace "e2e-tests-subpath-5tfzb" to be "success or failure" Jul 1 10:48:18.786: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Pending", Reason="", readiness=false. Elapsed: 5.572948ms Jul 1 10:48:20.791: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01020571s Jul 1 10:48:22.796: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015128916s Jul 1 10:48:24.801: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 6.020415264s Jul 1 10:48:26.817: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 8.036072176s Jul 1 10:48:28.820: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 10.039677524s Jul 1 10:48:30.826: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 12.045412348s Jul 1 10:48:32.833: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 14.052187964s Jul 1 10:48:34.838: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 16.05692046s Jul 1 10:48:36.845: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 18.064122994s Jul 1 10:48:38.866: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.085033945s Jul 1 10:48:40.872: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Running", Reason="", readiness=false. Elapsed: 22.091145132s Jul 1 10:48:42.882: INFO: Pod "pod-subpath-test-configmap-wjhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.101422201s STEP: Saw pod success Jul 1 10:48:42.882: INFO: Pod "pod-subpath-test-configmap-wjhw" satisfied condition "success or failure" Jul 1 10:48:42.889: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-configmap-wjhw container test-container-subpath-configmap-wjhw: STEP: delete the pod Jul 1 10:48:42.984: INFO: Waiting for pod pod-subpath-test-configmap-wjhw to disappear Jul 1 10:48:42.992: INFO: Pod pod-subpath-test-configmap-wjhw no longer exists STEP: Deleting pod pod-subpath-test-configmap-wjhw Jul 1 10:48:42.992: INFO: Deleting pod "pod-subpath-test-configmap-wjhw" in namespace "e2e-tests-subpath-5tfzb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:48:43.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5tfzb" for this suite. Jul 1 10:48:49.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:48:49.054: INFO: namespace: e2e-tests-subpath-5tfzb, resource: bindings, ignored listing per whitelist Jul 1 10:48:49.121: INFO: namespace e2e-tests-subpath-5tfzb deletion completed in 6.115144466s • [SLOW TEST:30.475 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:48:49.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 1 10:48:49.242: INFO: Waiting up to 5m0s for pod "pod-cee77173-9bed-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-fv8vr" to be "success or failure" Jul 1 10:48:49.312: INFO: Pod "pod-cee77173-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 69.861952ms Jul 1 10:48:51.316: INFO: Pod "pod-cee77173-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074484099s Jul 1 10:48:53.322: INFO: Pod "pod-cee77173-9bed-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079945894s STEP: Saw pod success Jul 1 10:48:53.322: INFO: Pod "pod-cee77173-9bed-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:48:53.326: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-cee77173-9bed-11e9-9f49-0242ac110006 container test-container: STEP: delete the pod Jul 1 10:48:54.132: INFO: Waiting for pod pod-cee77173-9bed-11e9-9f49-0242ac110006 to disappear Jul 1 10:48:54.155: INFO: Pod pod-cee77173-9bed-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:48:54.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fv8vr" for this suite. Jul 1 10:49:00.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:49:00.239: INFO: namespace: e2e-tests-emptydir-fv8vr, resource: bindings, ignored listing per whitelist Jul 1 10:49:00.268: INFO: namespace e2e-tests-emptydir-fv8vr deletion completed in 6.109617052s • [SLOW TEST:11.146 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:49:00.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jul 1 10:49:00.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:00.657: INFO: stderr: "" Jul 1 10:49:00.657: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 10:49:00.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:00.843: INFO: stderr: "" Jul 1 10:49:00.843: INFO: stdout: "update-demo-nautilus-cpg9s update-demo-nautilus-dvp29 " Jul 1 10:49:00.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:01.038: INFO: stderr: "" Jul 1 10:49:01.038: INFO: stdout: "" Jul 1 10:49:01.038: INFO: update-demo-nautilus-cpg9s is created but not running Jul 1 10:49:06.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:06.161: INFO: stderr: "" Jul 1 10:49:06.161: INFO: stdout: "update-demo-nautilus-cpg9s update-demo-nautilus-dvp29 " Jul 1 10:49:06.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:06.242: INFO: stderr: "" Jul 1 10:49:06.242: INFO: stdout: "true" Jul 1 10:49:06.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:06.313: INFO: stderr: "" Jul 1 10:49:06.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 10:49:06.313: INFO: validating pod update-demo-nautilus-cpg9s Jul 1 10:49:06.333: INFO: got data: { "image": "nautilus.jpg" } Jul 1 10:49:06.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 10:49:06.333: INFO: update-demo-nautilus-cpg9s is verified up and running Jul 1 10:49:06.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dvp29 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:06.416: INFO: stderr: "" Jul 1 10:49:06.416: INFO: stdout: "true" Jul 1 10:49:06.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dvp29 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:06.487: INFO: stderr: "" Jul 1 10:49:06.487: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 10:49:06.487: INFO: validating pod update-demo-nautilus-dvp29 Jul 1 10:49:06.491: INFO: got data: { "image": "nautilus.jpg" } Jul 1 10:49:06.491: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 10:49:06.491: INFO: update-demo-nautilus-dvp29 is verified up and running STEP: scaling down the replication controller Jul 1 10:49:06.493: INFO: scanned /root for discovery docs: Jul 1 10:49:06.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:07.599: INFO: stderr: "" Jul 1 10:49:07.599: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jul 1 10:49:07.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:07.703: INFO: stderr: "" Jul 1 10:49:07.703: INFO: stdout: "update-demo-nautilus-cpg9s update-demo-nautilus-dvp29 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 10:49:12.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:12.843: INFO: stderr: "" Jul 1 10:49:12.844: INFO: stdout: "update-demo-nautilus-cpg9s " Jul 1 10:49:12.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:12.954: INFO: stderr: "" Jul 1 10:49:12.954: INFO: stdout: "true" Jul 1 10:49:12.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:13.039: INFO: stderr: "" Jul 1 10:49:13.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 10:49:13.039: INFO: validating pod update-demo-nautilus-cpg9s Jul 1 10:49:13.046: INFO: got data: { "image": "nautilus.jpg" } Jul 1 10:49:13.046: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 10:49:13.046: INFO: update-demo-nautilus-cpg9s is verified up and running STEP: scaling up the replication controller Jul 1 10:49:13.048: INFO: scanned /root for discovery docs: Jul 1 10:49:13.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:14.159: INFO: stderr: "" Jul 1 10:49:14.159: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 10:49:14.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:14.261: INFO: stderr: "" Jul 1 10:49:14.261: INFO: stdout: "update-demo-nautilus-cpg9s update-demo-nautilus-p5mzg " Jul 1 10:49:14.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:14.343: INFO: stderr: "" Jul 1 10:49:14.343: INFO: stdout: "true" Jul 1 10:49:14.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:14.436: INFO: stderr: "" Jul 1 10:49:14.436: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 10:49:14.436: INFO: validating pod update-demo-nautilus-cpg9s Jul 1 10:49:14.441: INFO: got data: { "image": "nautilus.jpg" } Jul 1 10:49:14.441: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 10:49:14.441: INFO: update-demo-nautilus-cpg9s is verified up and running Jul 1 10:49:14.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p5mzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:14.529: INFO: stderr: "" Jul 1 10:49:14.529: INFO: stdout: "" Jul 1 10:49:14.529: INFO: update-demo-nautilus-p5mzg is created but not running Jul 1 10:49:19.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:19.658: INFO: stderr: "" Jul 1 10:49:19.658: INFO: stdout: "update-demo-nautilus-cpg9s update-demo-nautilus-p5mzg " Jul 1 10:49:19.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:19.770: INFO: stderr: "" Jul 1 10:49:19.770: INFO: stdout: "true" Jul 1 10:49:19.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpg9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:19.887: INFO: stderr: "" Jul 1 10:49:19.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 10:49:19.887: INFO: validating pod update-demo-nautilus-cpg9s Jul 1 10:49:19.891: INFO: got data: { "image": "nautilus.jpg" } Jul 1 10:49:19.891: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 10:49:19.891: INFO: update-demo-nautilus-cpg9s is verified up and running Jul 1 10:49:19.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p5mzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:19.991: INFO: stderr: "" Jul 1 10:49:19.991: INFO: stdout: "true" Jul 1 10:49:19.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p5mzg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:20.061: INFO: stderr: "" Jul 1 10:49:20.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 10:49:20.061: INFO: validating pod update-demo-nautilus-p5mzg Jul 1 10:49:20.065: INFO: got data: { "image": "nautilus.jpg" } Jul 1 10:49:20.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 10:49:20.065: INFO: update-demo-nautilus-p5mzg is verified up and running STEP: using delete to clean up resources Jul 1 10:49:20.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:20.150: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 10:49:20.150: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 10:49:20.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-2dvl5' Jul 1 10:49:20.247: INFO: stderr: "No resources found.\n" Jul 1 10:49:20.247: INFO: stdout: "" Jul 1 10:49:20.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-2dvl5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 10:49:20.316: INFO: stderr: "" Jul 1 10:49:20.316: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:49:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2dvl5" for this suite. 
Jul 1 10:49:44.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:49:44.513: INFO: namespace: e2e-tests-kubectl-2dvl5, resource: bindings, ignored listing per whitelist Jul 1 10:49:44.544: INFO: namespace e2e-tests-kubectl-2dvl5 deletion completed in 24.22426721s • [SLOW TEST:44.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:49:44.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 1 10:49:44.653: INFO: Waiting up to 5m0s for pod "pod-efee8125-9bed-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-zh77r" to be "success or failure" Jul 1 10:49:44.658: INFO: Pod "pod-efee8125-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429595ms Jul 1 10:49:46.663: INFO: Pod "pod-efee8125-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010312717s Jul 1 10:49:48.669: INFO: Pod "pod-efee8125-9bed-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015691824s STEP: Saw pod success Jul 1 10:49:48.669: INFO: Pod "pod-efee8125-9bed-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:49:48.672: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-efee8125-9bed-11e9-9f49-0242ac110006 container test-container: STEP: delete the pod Jul 1 10:49:48.791: INFO: Waiting for pod pod-efee8125-9bed-11e9-9f49-0242ac110006 to disappear Jul 1 10:49:48.824: INFO: Pod pod-efee8125-9bed-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:49:48.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zh77r" for this suite. 
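Both emptyDir specs in this run mount an emptyDir volume on the default medium and have the test container check the mount and file modes (0666 and 0644 variants). A rough stand-alone equivalent, with an illustrative busybox container in place of the e2e mounttest image:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
# After the pod completes, the mode bits show up in its logs:
kubectl logs pod-emptydir-demo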
Jul 1 10:49:54.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:49:55.008: INFO: namespace: e2e-tests-emptydir-zh77r, resource: bindings, ignored listing per whitelist Jul 1 10:49:55.014: INFO: namespace e2e-tests-emptydir-zh77r deletion completed in 6.185251634s • [SLOW TEST:10.470 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:49:55.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f6290037-9bed-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 10:49:55.124: INFO: Waiting up to 5m0s for pod "pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-rtxrq" to be "success or failure" Jul 1 10:49:55.141: INFO: Pod "pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.81354ms Jul 1 10:49:57.144: INFO: Pod "pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020103313s Jul 1 10:49:59.523: INFO: Pod "pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.398754289s STEP: Saw pod success Jul 1 10:49:59.523: INFO: Pod "pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:49:59.528: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006 container secret-volume-test: STEP: delete the pod Jul 1 10:49:59.603: INFO: Waiting for pod pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006 to disappear Jul 1 10:49:59.615: INFO: Pod pod-secrets-f62b0930-9bed-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:49:59.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rtxrq" for this suite. 
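The plain secret-volume spec is the non-projected counterpart of the projected-secret test earlier in this run: the Secret is mounted through a secret volume source instead of a projected one. An illustrative sketch (names and image are assumptions, not taken from the log):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF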
Jul 1 10:50:05.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:50:05.719: INFO: namespace: e2e-tests-secrets-rtxrq, resource: bindings, ignored listing per whitelist Jul 1 10:50:05.791: INFO: namespace e2e-tests-secrets-rtxrq deletion completed in 6.115092565s • [SLOW TEST:10.777 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:50:05.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gfsmf Jul 1 10:50:09.960: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gfsmf STEP: checking the pod's current state and verifying that restartCount is present Jul 1 10:50:09.963: INFO: Initial restart count of pod liveness-http is 0 Jul 1 10:50:32.018: INFO: Restart count of pod e2e-tests-container-probe-gfsmf/liveness-http is now 1 (22.054906287s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:50:32.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-gfsmf" for this suite. 
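The liveness spec creates a pod whose /healthz endpoint starts failing and then waits for restartCount to rise, as logged above. A minimal sketch patterned on the HTTP-liveness example from the Kubernetes docs; the image, args, and timings are assumptions, not the e2e test's own.

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/e2e-test-images/agnhost:2.40
    args: ["liveness"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# The kubelet restarts the container once /healthz starts returning errors:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'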
Jul 1 10:50:38.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:50:38.093: INFO: namespace: e2e-tests-container-probe-gfsmf, resource: bindings, ignored listing per whitelist Jul 1 10:50:38.135: INFO: namespace e2e-tests-container-probe-gfsmf deletion completed in 6.089809022s • [SLOW TEST:32.344 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:50:38.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jul 1 10:50:42.228: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0fd9eeb7-9bee-11e9-9f49-0242ac110006", GenerateName:"", Namespace:"e2e-tests-pods-gvm7h", SelfLink:"/api/v1/namespaces/e2e-tests-pods-gvm7h/pods/pod-submit-remove-0fd9eeb7-9bee-11e9-9f49-0242ac110006", UID:"0fda960f-9bee-11e9-a678-fa163e0cec1d", ResourceVersion:"1840358", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63697575038, loc:(*time.Location)(0x7947a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"197321758"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-92n4d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c70940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-92n4d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b8aea8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-x6tdbol33slm", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c6a900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b8aee0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b8af00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b8af08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b8af0c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697575038, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697575040, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697575040, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697575038, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"192.168.100.12", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0020ac380), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0020ac3a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://a0eec440f9985cb8d6783e14ade2c0098be0d346a9490c3dd9d280f635ae3013"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:50:55.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-gvm7h" for this suite. Jul 1 10:51:01.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:51:01.901: INFO: namespace: e2e-tests-pods-gvm7h, resource: bindings, ignored listing per whitelist Jul 1 10:51:01.960: INFO: namespace e2e-tests-pods-gvm7h deletion completed in 6.140815174s • [SLOW TEST:23.825 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:51:01.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4hnb8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4hnb8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 10:51:06.157: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.165: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.168: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.171: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.173: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.176: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.179: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.182: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.184: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.187: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.201: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.204: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local from pod 
e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.206: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.208: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.211: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006) Jul 1 10:51:06.211: INFO: Lookups using e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4hnb8.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jul 1 10:51:11.292: INFO: DNS probes using e2e-tests-dns-4hnb8/dns-test-1e1b3b5f-9bee-11e9-9f49-0242ac110006 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:51:11.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-4hnb8" for this suite. 
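The wheezy/jessie probe script above amounts to resolving the kubernetes.default service names over UDP and TCP from inside a pod. A hand-run equivalent looks roughly like this (a sketch; busybox:1.28 is the image commonly suggested for DNS debugging, not the probe image the test builds):

# one-off pod that resolves the API service name through the cluster DNS
kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
# a non-empty answer corresponds to the "echo OK > /results/..." markers the probers write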
Jul 1 10:51:17.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:51:17.526: INFO: namespace: e2e-tests-dns-4hnb8, resource: bindings, ignored listing per whitelist Jul 1 10:51:17.579: INFO: namespace e2e-tests-dns-4hnb8 deletion completed in 6.156754381s • [SLOW TEST:15.619 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:51:17.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bnhlf [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jul 1 10:51:17.708: INFO: Found 0 stateful pods, waiting for 3 Jul 1 10:51:27.712: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 10:51:27.712: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 10:51:27.712: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 1 10:51:27.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnhlf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 10:51:28.066: INFO: stderr: "" Jul 1 10:51:28.066: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 10:51:28.066: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 1 10:51:38.110: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 1 10:51:48.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnhlf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 10:51:48.353: INFO: stderr: "" Jul 1 10:51:48.353: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 10:51:48.353: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 
10:52:18.368: INFO: Waiting for StatefulSet e2e-tests-statefulset-bnhlf/ss2 to complete update STEP: Rolling back to a previous revision Jul 1 10:52:28.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnhlf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 10:52:28.646: INFO: stderr: "" Jul 1 10:52:28.646: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 10:52:28.646: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 10:52:38.689: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 1 10:52:48.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bnhlf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 10:52:49.114: INFO: stderr: "" Jul 1 10:52:49.114: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 10:52:49.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 10:52:59.138: INFO: Waiting for StatefulSet e2e-tests-statefulset-bnhlf/ss2 to complete update Jul 1 10:52:59.138: INFO: Waiting for Pod e2e-tests-statefulset-bnhlf/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 Jul 1 10:52:59.138: INFO: Waiting for Pod e2e-tests-statefulset-bnhlf/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9 Jul 1 10:53:09.146: INFO: Waiting for StatefulSet e2e-tests-statefulset-bnhlf/ss2 to complete update Jul 1 10:53:09.146: INFO: Waiting for Pod e2e-tests-statefulset-bnhlf/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 Jul 1 10:53:19.149: INFO: Waiting for StatefulSet e2e-tests-statefulset-bnhlf/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 10:53:29.148: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bnhlf Jul 1 10:53:29.155: INFO: Scaling statefulset ss2 to 0 Jul 1 10:53:49.201: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 10:53:49.204: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:53:49.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bnhlf" for this suite. 
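The rolling update and rollback sequence above can be driven by hand against any StatefulSet. A sketch, assuming a StatefulSet named web whose container is named nginx and whose pods carry the label app=web (all placeholders); rollout undo for StatefulSets needs a reasonably recent kubectl:

kubectl set image statefulset/web nginx=nginx:1.15-alpine   # trigger a new controller revision
kubectl rollout status statefulset/web                      # pods are replaced in reverse ordinal order
kubectl rollout undo statefulset/web                        # roll back to the previous revision
kubectl get pods -l app=web -L controller-revision-hash     # shows which revision each ordinal is on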
Jul 1 10:53:57.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:53:57.253: INFO: namespace: e2e-tests-statefulset-bnhlf, resource: bindings, ignored listing per whitelist Jul 1 10:53:57.316: INFO: namespace e2e-tests-statefulset-bnhlf deletion completed in 8.090287933s • [SLOW TEST:159.737 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:53:57.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jul 1 10:53:57.498: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-xxt8m" to be "success or failure" Jul 1 10:53:57.501: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.133083ms Jul 1 10:53:59.534: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035585135s Jul 1 10:54:01.538: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040117584s STEP: Saw pod success Jul 1 10:54:01.538: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jul 1 10:54:01.571: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-host-path-test container test-container-1: STEP: delete the pod Jul 1 10:54:01.628: INFO: Waiting for pod pod-host-path-test to disappear Jul 1 10:54:01.637: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:54:01.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-xxt8m" for this suite. 
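The hostPath check above comes down to mounting a directory from the node and reading the permission bits at the mount point. A minimal sketch (the names, image and /tmp/hostpath-demo path are placeholders; hostPath needs access to the node's filesystem, so it is often restricted in real clusters):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox:1.28
    command: ["sh", "-c", "stat -c 'mode=%a' /test-volume"]   # print the mount point's mode
    volumeMounts:
    - name: host-vol
      mountPath: /test-volume
  volumes:
  - name: host-vol
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate       # create the directory on the node if it is missing
EOF
kubectl logs hostpath-mode-demo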
Jul 1 10:54:07.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:54:07.732: INFO: namespace: e2e-tests-hostpath-xxt8m, resource: bindings, ignored listing per whitelist Jul 1 10:54:07.795: INFO: namespace e2e-tests-hostpath-xxt8m deletion completed in 6.15222576s • [SLOW TEST:10.479 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:54:07.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 10:54:07.936: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-9kr2r" to be "success or failure" Jul 1 10:54:07.945: INFO: Pod "downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041898ms Jul 1 10:54:09.950: INFO: Pod "downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013996635s Jul 1 10:54:11.957: INFO: Pod "downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020322481s STEP: Saw pod success Jul 1 10:54:11.957: INFO: Pod "downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:54:11.962: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006 container client-container: STEP: delete the pod Jul 1 10:54:11.994: INFO: Waiting for pod downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006 to disappear Jul 1 10:54:12.000: INFO: Pod downwardapi-volume-8cd2884d-9bee-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:54:12.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9kr2r" for this suite. 
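"Should set mode on item file" boils down to a projected downwardAPI volume whose item carries an explicit per-file mode. A sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox:1.28
    # -L follows the symlink the projected volume creates for each item
    command: ["sh", "-c", "stat -Lc 'podname mode=%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400              # the per-item mode this kind of test asserts on
EOF
kubectl logs projected-downward-mode-demo    # expect: podname mode=400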
Jul 1 10:54:18.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:54:18.055: INFO: namespace: e2e-tests-projected-9kr2r, resource: bindings, ignored listing per whitelist Jul 1 10:54:18.114: INFO: namespace e2e-tests-projected-9kr2r deletion completed in 6.094670955s • [SLOW TEST:10.319 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:54:18.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0701 10:54:19.348959 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 10:54:19.349: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:54:19.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-dz775" for this suite. 
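The garbage-collector spec above checks that deleting a Deployment without orphaning also removes its ReplicaSet and pods once the owner references are processed. A hand-run equivalent (gc-demo and the image are placeholders):

kubectl create deployment gc-demo --image=nginx:1.14-alpine
kubectl get replicaset -l app=gc-demo        # the Deployment-owned ReplicaSet
kubectl delete deployment gc-demo            # default delete propagates to owned objects
kubectl get replicaset,pods -l app=gc-demo   # emptied shortly after, as the test polls for above
# to orphan instead, pass --cascade=orphan (newer kubectl) or --cascade=false (older releases)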
Jul 1 10:54:25.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:54:25.471: INFO: namespace: e2e-tests-gc-dz775, resource: bindings, ignored listing per whitelist Jul 1 10:54:25.521: INFO: namespace e2e-tests-gc-dz775 deletion completed in 6.167004457s • [SLOW TEST:7.408 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:54:25.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 10:54:25.636: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jul 1 10:54:25.679: INFO: Number of nodes with available pods: 0 Jul 1 10:54:25.679: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jul 1 10:54:25.753: INFO: Number of nodes with available pods: 0 Jul 1 10:54:25.754: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:26.759: INFO: Number of nodes with available pods: 0 Jul 1 10:54:26.759: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:27.810: INFO: Number of nodes with available pods: 0 Jul 1 10:54:27.810: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:28.756: INFO: Number of nodes with available pods: 1 Jul 1 10:54:28.757: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jul 1 10:54:28.845: INFO: Number of nodes with available pods: 1 Jul 1 10:54:28.845: INFO: Number of running nodes: 0, number of available pods: 1 Jul 1 10:54:29.851: INFO: Number of nodes with available pods: 0 Jul 1 10:54:29.852: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jul 1 10:54:29.867: INFO: Number of nodes with available pods: 0 Jul 1 10:54:29.868: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:30.875: INFO: Number of nodes with available pods: 0 Jul 1 10:54:30.875: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:31.873: INFO: Number of nodes with available pods: 0 Jul 1 10:54:31.873: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:32.871: INFO: Number of nodes with available pods: 0 Jul 1 10:54:32.871: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:33.873: INFO: Number of nodes with available pods: 0 Jul 1 10:54:33.873: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:34.890: INFO: Number of nodes with available pods: 0 Jul 1 10:54:34.890: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:35.872: INFO: Number of nodes with available pods: 1 Jul 1 10:54:35.873: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ssmpq, will wait for the garbage collector to delete the pods Jul 1 10:54:35.948: INFO: Deleting DaemonSet.extensions daemon-set took: 14.876787ms Jul 1 10:54:36.049: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.174868ms Jul 1 10:54:45.860: INFO: Number of nodes with available pods: 0 Jul 1 10:54:45.860: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 10:54:45.866: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ssmpq/daemonsets","resourceVersion":"1841197"},"items":null} Jul 1 10:54:45.873: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ssmpq/pods","resourceVersion":"1841197"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:54:45.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-ssmpq" for this suite. 
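The "complex daemon" flow above is a DaemonSet constrained by a nodeSelector plus node relabelling. A sketch with placeholder names and image (substitute a real node name for <node-name>):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: selector-daemon
spec:
  selector:
    matchLabels:
      app: selector-daemon
  template:
    metadata:
      labels:
        app: selector-daemon
    spec:
      nodeSelector:
        color: blue               # only schedule onto nodes labelled color=blue
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
kubectl label node <node-name> color=blue               # a daemon pod is launched on that node
kubectl label node <node-name> color=green --overwrite  # the daemon pod is unscheduled again
kubectl label node <node-name> color-                   # clean up the label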
Jul 1 10:54:52.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:54:52.178: INFO: namespace: e2e-tests-daemonsets-ssmpq, resource: bindings, ignored listing per whitelist Jul 1 10:54:52.195: INFO: namespace e2e-tests-daemonsets-ssmpq deletion completed in 6.243281592s • [SLOW TEST:26.674 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:54:52.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 10:54:52.398: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jul 1 10:54:52.418: INFO: Number of nodes with available pods: 0 Jul 1 10:54:52.418: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:53.430: INFO: Number of nodes with available pods: 0 Jul 1 10:54:53.430: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:54.426: INFO: Number of nodes with available pods: 0 Jul 1 10:54:54.426: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:54:55.429: INFO: Number of nodes with available pods: 1 Jul 1 10:54:55.429: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 1 10:54:55.503: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:54:56.517: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:54:57.515: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:54:58.514: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:54:58.514: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:54:59.516: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:54:59.516: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:00.517: INFO: Wrong image for pod: daemon-set-dbn94. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:55:00.517: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:01.516: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:55:01.516: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:02.516: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:55:02.516: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:03.519: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:55:03.519: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:04.517: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:55:04.517: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:05.516: INFO: Wrong image for pod: daemon-set-dbn94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 1 10:55:05.516: INFO: Pod daemon-set-dbn94 is not available Jul 1 10:55:06.516: INFO: Pod daemon-set-p5g7b is not available STEP: Check that daemon pods are still running on every node of the cluster. Jul 1 10:55:06.529: INFO: Number of nodes with available pods: 0 Jul 1 10:55:06.529: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:55:07.541: INFO: Number of nodes with available pods: 0 Jul 1 10:55:07.541: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod Jul 1 10:55:08.539: INFO: Number of nodes with available pods: 1 Jul 1 10:55:08.539: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6rcrg, will wait for the garbage collector to delete the pods Jul 1 10:55:08.644: INFO: Deleting DaemonSet.extensions daemon-set took: 17.576202ms Jul 1 10:55:08.744: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.175436ms Jul 1 10:55:12.348: INFO: Number of nodes with available pods: 0 Jul 1 10:55:12.348: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 10:55:12.351: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6rcrg/daemonsets","resourceVersion":"1841293"},"items":null} Jul 1 10:55:12.355: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6rcrg/pods","resourceVersion":"1841293"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:55:12.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6rcrg" for this suite. 
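The RollingUpdate spec above is equivalent to changing the image in a DaemonSet's pod template and waiting for the controller to replace the pods. A sketch, assuming an apps/v1 DaemonSet named daemon-set whose single container is named app (both names are assumptions; the redis image is the one the test expects):

kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set      # waits until every daemon pod runs the new image
kubectl describe daemonset/daemon-set | grep Image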
Jul 1 10:55:18.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:55:18.456: INFO: namespace: e2e-tests-daemonsets-6rcrg, resource: bindings, ignored listing per whitelist Jul 1 10:55:18.554: INFO: namespace e2e-tests-daemonsets-6rcrg deletion completed in 6.180021444s • [SLOW TEST:26.358 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:55:18.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 10:55:18.617: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:55:22.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kvjwf" for this suite. 
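This spec drives the pod's exec subresource over a websocket client built from the kubeConfig. There is no dedicated CLI for that, but kubectl exec streams through the same exec subresource, so a rough command-line analogue looks like this (pod name and command are placeholders):

kubectl run exec-demo --image=nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/exec-demo   # newer kubectl; otherwise poll kubectl get pod
kubectl exec exec-demo -- cat /etc/resolv.conf     # remote command execution against the running pod
kubectl delete pod exec-demo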
Jul 1 10:56:08.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:56:09.015: INFO: namespace: e2e-tests-pods-kvjwf, resource: bindings, ignored listing per whitelist Jul 1 10:56:09.035: INFO: namespace e2e-tests-pods-kvjwf deletion completed in 46.17695062s • [SLOW TEST:50.481 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:56:09.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d52261a6-9bee-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 10:56:09.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-29txh" to be "success or failure" Jul 1 10:56:09.209: INFO: Pod "pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.454858ms Jul 1 10:56:11.212: INFO: Pod "pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017415376s Jul 1 10:56:13.215: INFO: Pod "pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020455542s STEP: Saw pod success Jul 1 10:56:13.215: INFO: Pod "pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:56:13.218: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006 container configmap-volume-test: STEP: delete the pod Jul 1 10:56:13.238: INFO: Waiting for pod pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006 to disappear Jul 1 10:56:13.241: INFO: Pod pod-configmaps-d522dbea-9bee-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:56:13.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-29txh" for this suite. 
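The defaultMode variant above mounts a ConfigMap volume with a non-default file mode and reads it back. A sketch with placeholder names:

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox:1.28
    command: ["sh", "-c", "stat -Lc 'data-1 mode=%a' /etc/config/data-1 && cat /etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: cm-mode-demo
      defaultMode: 0400        # the knob this variant of the test asserts on
EOF
kubectl logs cm-mode-demo      # expect: data-1 mode=400, then value-1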
Jul 1 10:56:19.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:56:19.272: INFO: namespace: e2e-tests-configmap-29txh, resource: bindings, ignored listing per whitelist Jul 1 10:56:19.404: INFO: namespace e2e-tests-configmap-29txh deletion completed in 6.159170006s • [SLOW TEST:10.369 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:56:19.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 1 10:56:19.510: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:56:23.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-75rb2" for this suite. 
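The RestartNever init-container spec above hinges on one rule: if an init container fails and the pod's restartPolicy is Never, the app containers are never started and the pod goes to Failed. A minimal sketch (placeholder names and images):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.28
    command: ["sh", "-c", "exit 1"]   # fails once and is not retried under restartPolicy Never
  containers:
  - name: app
    image: nginx:1.14-alpine          # never started because the init container failed
EOF
kubectl get pod init-fail-demo        # STATUS shows Init:Error and the pod ends up Failed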
Jul 1 10:56:29.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:56:29.653: INFO: namespace: e2e-tests-init-container-75rb2, resource: bindings, ignored listing per whitelist Jul 1 10:56:29.658: INFO: namespace e2e-tests-init-container-75rb2 deletion completed in 6.162757573s • [SLOW TEST:10.254 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:56:29.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 1 10:56:29.795: INFO: Waiting up to 5m0s for pod "downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-l97kt" to be "success or failure" Jul 1 10:56:29.825: INFO: Pod "downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 29.733974ms Jul 1 10:56:31.832: INFO: Pod "downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037467666s Jul 1 10:56:33.840: INFO: Pod "downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045093588s STEP: Saw pod success Jul 1 10:56:33.840: INFO: Pod "downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:56:33.845: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006 container dapi-container: STEP: delete the pod Jul 1 10:56:33.923: INFO: Waiting for pod downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006 to disappear Jul 1 10:56:33.934: INFO: Pod downward-api-e169bbd1-9bee-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:56:33.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l97kt" for this suite. 
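The downward API spec above injects the pod's name, namespace and IP through fieldRef env sources. A sketch (names are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi
    image: busybox:1.28
    command: ["sh", "-c", "echo POD_NAME=$POD_NAME POD_NAMESPACE=$POD_NAMESPACE POD_IP=$POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo    # prints the three values, like the dapi-container log check above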
Jul 1 10:56:39.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:56:40.073: INFO: namespace: e2e-tests-downward-api-l97kt, resource: bindings, ignored listing per whitelist Jul 1 10:56:40.089: INFO: namespace e2e-tests-downward-api-l97kt deletion completed in 6.149262235s • [SLOW TEST:10.430 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:56:40.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-e7a390fd-9bee-11e9-9f49-0242ac110006 STEP: Creating secret with name s-test-opt-upd-e7a3914e-9bee-11e9-9f49-0242ac110006 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e7a390fd-9bee-11e9-9f49-0242ac110006 STEP: Updating secret s-test-opt-upd-e7a3914e-9bee-11e9-9f49-0242ac110006 STEP: Creating secret with name s-test-opt-create-e7a39169-9bee-11e9-9f49-0242ac110006 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:56:48.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-frw84" for this suite. 
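The optional-updates spec above projects secrets that may not exist yet (optional: true), then deletes, updates and creates secrets and waits for the mounted files to follow. A sketch with placeholder names; the kubelet resyncs projected volumes periodically, so changes show up after a short delay:

kubectl create secret generic s-demo-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.28
    command: ["sh", "-c", "while true; do cat /etc/projected/upd/data-1 2>/dev/null; ls /etc/projected/create 2>/dev/null; echo ---; sleep 5; done"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-demo-upd
          optional: true
          items:
          - key: data-1
            path: upd/data-1
      - secret:
          name: s-demo-create      # does not exist yet; optional, so the pod still starts
          optional: true
          items:
          - key: data-1
            path: create/data-1
EOF
kubectl create secret generic s-demo-create --from-literal=data-1=value-1
kubectl patch secret s-demo-upd -p '{"stringData":{"data-1":"value-2"}}'
kubectl logs -f projected-secret-demo    # the mounted files eventually reflect both changes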
Jul 1 10:57:12.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:57:12.642: INFO: namespace: e2e-tests-projected-frw84, resource: bindings, ignored listing per whitelist Jul 1 10:57:12.652: INFO: namespace e2e-tests-projected-frw84 deletion completed in 24.175092043s • [SLOW TEST:32.563 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:57:12.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 1 10:57:12.738: INFO: Waiting up to 5m0s for pod "pod-fb0270d0-9bee-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-g2q9x" to be "success or failure" Jul 1 10:57:12.747: INFO: Pod "pod-fb0270d0-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.508901ms Jul 1 10:57:14.866: INFO: Pod "pod-fb0270d0-9bee-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127849098s Jul 1 10:57:16.869: INFO: Pod "pod-fb0270d0-9bee-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130698431s STEP: Saw pod success Jul 1 10:57:16.869: INFO: Pod "pod-fb0270d0-9bee-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:57:16.871: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-fb0270d0-9bee-11e9-9f49-0242ac110006 container test-container: STEP: delete the pod Jul 1 10:57:16.941: INFO: Waiting for pod pod-fb0270d0-9bee-11e9-9f49-0242ac110006 to disappear Jul 1 10:57:16.948: INFO: Pod pod-fb0270d0-9bee-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:57:16.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-g2q9x" for this suite. 
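The (non-root,0666,tmpfs) variant above writes a 0666 file into a memory-backed emptyDir as a non-root user and checks the result. A sketch (the uid, names and image are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # run as a non-root uid
  containers:
  - name: writer
    image: busybox:1.28
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0666 /mnt/test/f && stat -c 'mode=%a uid=%u' /mnt/test/f && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo    # expect mode=666, uid=1001 and a tmpfs mount entry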
Jul 1 10:57:22.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:57:23.072: INFO: namespace: e2e-tests-emptydir-g2q9x, resource: bindings, ignored listing per whitelist Jul 1 10:57:23.121: INFO: namespace e2e-tests-emptydir-g2q9x deletion completed in 6.168980386s • [SLOW TEST:10.469 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:57:23.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:57:27.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-494dv" for this suite. 
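The kubelet case above only checks that whatever a busybox container writes to stdout shows up through the logs endpoint. Reproducing it by hand looks roughly like this (the pod name and echoed text are illustrative):

kubectl run busybox-logs-demo --image=docker.io/library/busybox:1.29 \
  --restart=Never -- sh -c 'echo "hello from the busybox container"'
sleep 10                           # crude wait; the e2e framework polls pod status instead
kubectl logs busybox-logs-demo     # should print the echoed line
kubectl delete pod busybox-logs-demo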
Jul 1 10:58:19.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:58:19.387: INFO: namespace: e2e-tests-kubelet-test-494dv, resource: bindings, ignored listing per whitelist Jul 1 10:58:19.453: INFO: namespace e2e-tests-kubelet-test-494dv deletion completed in 52.176859916s • [SLOW TEST:56.332 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:58:19.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:58:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-znq4v" for this suite. 
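The wrapper-volume test mounts a secret volume and a configMap volume into one pod (the kubelet wraps both in emptyDirs) and verifies the two wrappers do not collide before cleaning everything up, which is what the STEP lines above show. A minimal version with hypothetical names:

kubectl create secret generic wrapper-secret --from-literal=key=value
kubectl create configmap wrapper-configmap --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-vol /etc/configmap-vol"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: configmap-vol
      mountPath: /etc/configmap-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret
  - name: configmap-vol
    configMap:
      name: wrapper-configmap
EOF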
Jul 1 10:58:29.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:58:29.988: INFO: namespace: e2e-tests-emptydir-wrapper-znq4v, resource: bindings, ignored listing per whitelist Jul 1 10:58:30.020: INFO: namespace e2e-tests-emptydir-wrapper-znq4v deletion completed in 6.21929998s • [SLOW TEST:10.567 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:58:30.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 1 10:58:30.142: INFO: Waiting up to 5m0s for pod "downward-api-2924f016-9bef-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-k7mwz" to be "success or failure" Jul 1 10:58:30.189: INFO: Pod "downward-api-2924f016-9bef-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 47.735499ms Jul 1 10:58:32.195: INFO: Pod "downward-api-2924f016-9bef-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053413201s Jul 1 10:58:34.201: INFO: Pod "downward-api-2924f016-9bef-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059698365s STEP: Saw pod success Jul 1 10:58:34.201: INFO: Pod "downward-api-2924f016-9bef-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:58:34.204: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-2924f016-9bef-11e9-9f49-0242ac110006 container dapi-container: STEP: delete the pod Jul 1 10:58:34.272: INFO: Waiting for pod downward-api-2924f016-9bef-11e9-9f49-0242ac110006 to disappear Jul 1 10:58:34.277: INFO: Pod downward-api-2924f016-9bef-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:58:34.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k7mwz" for this suite. 
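The Downward API test above injects the container's own resource limits and requests into its environment through resourceFieldRef selectors and then checks the dapi-container's output against the declared values. A sketch with illustrative names and resource numbers (the remaining request/limit variables follow the same pattern):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
kubectl logs downward-resources-demo       # after the pod completes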
Jul 1 10:58:40.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:58:40.444: INFO: namespace: e2e-tests-downward-api-k7mwz, resource: bindings, ignored listing per whitelist Jul 1 10:58:40.444: INFO: namespace e2e-tests-downward-api-k7mwz deletion completed in 6.163105165s • [SLOW TEST:10.424 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:58:40.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2f6b23ab-9bef-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 10:58:40.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-h97qd" to be "success or failure" Jul 1 10:58:40.690: INFO: Pod "pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.574729ms Jul 1 10:58:42.695: INFO: Pod "pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025000733s Jul 1 10:58:44.702: INFO: Pod "pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031409991s STEP: Saw pod success Jul 1 10:58:44.702: INFO: Pod "pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:58:44.706: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006 container configmap-volume-test: STEP: delete the pod Jul 1 10:58:44.773: INFO: Waiting for pod pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006 to disappear Jul 1 10:58:44.776: INFO: Pod pod-configmaps-2f6bdce6-9bef-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:58:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-h97qd" for this suite. 
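The configMap test above mounts a configMap as a volume but remaps the key to a different path via items, and runs the pod as a non-root user. Roughly, with made-up object names and UID:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # any non-root UID
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/config/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:
      - key: data-1            # original key in the configMap
        path: path/to/data-1   # remapped path inside the mount
EOF
kubectl logs configmap-mapping-demo        # should print value-1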
Jul 1 10:58:50.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:58:50.984: INFO: namespace: e2e-tests-configmap-h97qd, resource: bindings, ignored listing per whitelist Jul 1 10:58:51.014: INFO: namespace e2e-tests-configmap-h97qd deletion completed in 6.233544394s • [SLOW TEST:10.569 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:58:51.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jul 1 10:58:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-gkktj run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jul 1 10:58:55.719: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Jul 1 10:58:55.719: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:58:57.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gkktj" for this suite. 
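The command this kubectl test runs can be replayed by hand; on this v1.13 cluster it still relies on the deprecated job/v1 generator (hence the warning in stderr), attaches stdin, and removes the job when the attached command exits:

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'

# --rm deletes the job once the attach ends, so this should report NotFound:
kubectl get job e2e-test-rm-busybox-job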
Jul 1 10:59:07.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:59:07.792: INFO: namespace: e2e-tests-kubectl-gkktj, resource: bindings, ignored listing per whitelist Jul 1 10:59:07.855: INFO: namespace e2e-tests-kubectl-gkktj deletion completed in 10.124732124s • [SLOW TEST:16.841 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:59:07.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-xl62z;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-xl62z;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xl62z.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 244.9.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.9.244_udp@PTR;check="$$(dig +tcp +noall +answer +search 244.9.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.9.244_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-xl62z;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-xl62z;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-xl62z.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-xl62z.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xl62z.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 244.9.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.9.244_udp@PTR;check="$$(dig +tcp +noall +answer +search 244.9.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.9.244_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 10:59:12.192: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.195: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.199: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-xl62z from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.201: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-xl62z from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.205: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.209: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.217: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.220: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.223: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.226: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.229: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.231: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.235: INFO: Unable to read 10.111.9.244_udp@PTR from pod 
e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.238: INFO: Unable to read 10.111.9.244_tcp@PTR from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.243: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.246: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.251: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-xl62z from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.255: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-xl62z from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.259: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.264: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.270: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.274: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.277: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.280: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.283: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.286: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could 
not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.289: INFO: Unable to read 10.111.9.244_udp@PTR from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.292: INFO: Unable to read 10.111.9.244_tcp@PTR from pod e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006: the server could not find the requested resource (get pods dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006) Jul 1 10:59:12.292: INFO: Lookups using e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-xl62z wheezy_tcp@dns-test-service.e2e-tests-dns-xl62z wheezy_udp@dns-test-service.e2e-tests-dns-xl62z.svc wheezy_tcp@dns-test-service.e2e-tests-dns-xl62z.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.111.9.244_udp@PTR 10.111.9.244_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-xl62z jessie_tcp@dns-test-service.e2e-tests-dns-xl62z jessie_udp@dns-test-service.e2e-tests-dns-xl62z.svc jessie_tcp@dns-test-service.e2e-tests-dns-xl62z.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-xl62z.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-xl62z.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.111.9.244_udp@PTR 10.111.9.244_tcp@PTR] Jul 1 10:59:17.360: INFO: DNS probes using e2e-tests-dns-xl62z/dns-test-3fc9bd25-9bef-11e9-9f49-0242ac110006 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:59:17.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-xl62z" for this suite. 
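The DNS probes above run dig loops inside wheezy and jessie client pods against a regular and a headless service, writing an OK marker file per record type; the initial "Unable to read ..." lines are just the poll loop firing before the probe pod has produced results, and the run converges on "DNS probes ... succeeded". A manual spot-check of the same service records might look like this, where the service name, the default namespace, and busybox's nslookup stand in for the test's generated objects and dig-based probes:

kubectl create service clusterip dns-test-service --tcp=80:80
kubectl run dns-client --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-client --timeout=120s
# The test checks A records for the short, namespaced and .svc forms,
# plus SRV and PTR records; nslookup covers the A-record part:
kubectl exec dns-client -- nslookup dns-test-service
kubectl exec dns-client -- nslookup dns-test-service.default.svc.cluster.local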
Jul 1 10:59:24.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:59:24.142: INFO: namespace: e2e-tests-dns-xl62z, resource: bindings, ignored listing per whitelist Jul 1 10:59:24.255: INFO: namespace e2e-tests-dns-xl62z deletion completed in 6.326456883s • [SLOW TEST:16.401 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:59:24.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jul 1 10:59:24.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jul 1 10:59:24.427: INFO: stderr: "" Jul 1 10:59:24.427: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.4:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:59:24.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j4c85" for this suite. 
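The escape sequences in the cluster-info stdout above (\x1b[0;32m and friends) are ANSI color codes around "Kubernetes master" and "KubeDNS", not corruption; the test only checks that the master endpoint is listed. The same information, plus a fuller state dump, is available with:

kubectl cluster-info
# Kubernetes master is running at https://172.24.4.4:6443
# KubeDNS is running at https://172.24.4.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl cluster-info dump --output-directory=/tmp/cluster-state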
Jul 1 10:59:30.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:59:30.592: INFO: namespace: e2e-tests-kubectl-j4c85, resource: bindings, ignored listing per whitelist Jul 1 10:59:30.614: INFO: namespace e2e-tests-kubectl-j4c85 deletion completed in 6.183641647s • [SLOW TEST:6.359 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:59:30.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-4d472e19-9bef-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 10:59:30.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-mv7s6" to be "success or failure" Jul 1 10:59:30.778: INFO: Pod "pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.171192ms Jul 1 10:59:32.782: INFO: Pod "pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016008321s Jul 1 10:59:34.786: INFO: Pod "pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020438414s STEP: Saw pod success Jul 1 10:59:34.787: INFO: Pod "pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 10:59:34.790: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006 container configmap-volume-test: STEP: delete the pod Jul 1 10:59:34.832: INFO: Waiting for pod pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006 to disappear Jul 1 10:59:34.839: INFO: Pod pod-configmaps-4d487773-9bef-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 10:59:34.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mv7s6" for this suite. 
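This configMap case is the same key-to-path mapping as the earlier one, with the addition of an explicit per-item file mode; the "Item mode set" in the test name refers to that mode field. A compact sketch, again with illustrative names:

kubectl create configmap demo-config-mode --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/config/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: demo-config-mode
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400             # per-item file mode
EOF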
Jul 1 10:59:40.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 10:59:41.016: INFO: namespace: e2e-tests-configmap-mv7s6, resource: bindings, ignored listing per whitelist Jul 1 10:59:41.049: INFO: namespace e2e-tests-configmap-mv7s6 deletion completed in 6.206858672s • [SLOW TEST:10.435 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 10:59:41.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-f995p [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-f995p STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-f995p Jul 1 10:59:41.232: INFO: Found 0 stateful pods, waiting for 1 Jul 1 10:59:51.237: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 1 10:59:51.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 10:59:51.579: INFO: stderr: "" Jul 1 10:59:51.579: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 10:59:51.579: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 10:59:51.582: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 1 11:00:01.588: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 11:00:01.588: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 11:00:01.613: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:01.613: INFO: ss-0 hunter-server-x6tdbol33slm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2019-07-01 10:59:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:01.613: INFO: Jul 1 11:00:01.613: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 1 11:00:02.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990889048s Jul 1 11:00:03.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983491663s Jul 1 11:00:04.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.966652485s Jul 1 11:00:05.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960075843s Jul 1 11:00:06.654: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.955538381s Jul 1 11:00:07.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9492722s Jul 1 11:00:08.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.942452024s Jul 1 11:00:09.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.937156027s Jul 1 11:00:10.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.040101ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-f995p Jul 1 11:00:11.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:00:12.050: INFO: stderr: "" Jul 1 11:00:12.050: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 11:00:12.050: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 11:00:12.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:00:12.311: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" Jul 1 11:00:12.311: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 11:00:12.311: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 11:00:12.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:00:12.598: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n" Jul 1 11:00:12.598: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 1 11:00:12.598: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 1 11:00:12.603: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 11:00:12.604: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 11:00:12.604: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 1 11:00:12.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 11:00:12.912: INFO: stderr: "" Jul 1 11:00:12.912: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 11:00:12.912: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 11:00:12.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 11:00:13.209: INFO: stderr: "" Jul 1 11:00:13.209: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 11:00:13.209: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 11:00:13.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 1 11:00:13.479: INFO: stderr: "" Jul 1 11:00:13.479: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 1 11:00:13.479: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 1 11:00:13.479: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 11:00:13.483: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 1 11:00:23.495: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 11:00:23.495: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 1 11:00:23.495: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 1 11:00:23.548: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:23.548: INFO: ss-0 hunter-server-x6tdbol33slm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:23.548: INFO: ss-1 hunter-server-x6tdbol33slm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:23.548: INFO: ss-2 hunter-server-x6tdbol33slm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:23.548: INFO: Jul 1 11:00:23.548: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 11:00:24.565: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:24.565: INFO: ss-0 hunter-server-x6tdbol33slm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:24.565: INFO: ss-1 hunter-server-x6tdbol33slm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:24.566: INFO: ss-2 hunter-server-x6tdbol33slm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:24.566: INFO: Jul 1 11:00:24.566: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 11:00:25.572: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:25.572: INFO: ss-0 hunter-server-x6tdbol33slm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:25.572: INFO: ss-1 hunter-server-x6tdbol33slm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:25.572: INFO: ss-2 hunter-server-x6tdbol33slm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:25.572: INFO: Jul 1 11:00:25.572: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 11:00:26.575: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:26.575: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:26.575: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:26.575: INFO: Jul 1 11:00:26.575: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 11:00:27.578: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:27.578: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:27.578: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:27.578: INFO: Jul 1 11:00:27.578: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 11:00:28.582: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:28.582: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:28.582: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:28.582: INFO: Jul 1 11:00:28.582: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 11:00:29.588: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:29.588: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:29.588: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:29.588: INFO: Jul 1 11:00:29.588: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 11:00:30.637: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:30.637: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:30.637: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:30.637: INFO: Jul 1 11:00:30.637: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 11:00:31.652: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:31.652: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:31.652: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:31.652: INFO: Jul 1 11:00:31.652: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 11:00:32.657: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 11:00:32.658: INFO: ss-0 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:13 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 10:59:41 +0000 UTC }] Jul 1 11:00:32.658: INFO: ss-2 hunter-server-x6tdbol33slm Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:00:01 +0000 UTC }] Jul 1 11:00:32.658: INFO: Jul 1 11:00:32.658: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-f995p Jul 1 11:00:33.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:00:33.807: INFO: rc: 1 Jul 1 11:00:33.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001cfe660 exit status 1 true [0xc001148548 0xc001148560 0xc001148578] [0xc001148548 0xc001148560 0xc001148578] [0xc001148558 0xc001148570] [0x9333e0 0x9333e0] 0xc002146cc0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jul 1 11:00:43.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:00:43.903: INFO: rc: 1 Jul 1 11:00:43.903: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001cfe780 exit status 1 true [0xc001148580 0xc001148598 0xc0011485b0] [0xc001148580 0xc001148598 0xc0011485b0] [0xc001148590 0xc0011485a8] [0x9333e0 0x9333e0] 0xc002147020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:00:53.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:00:54.005: INFO: rc: 1 Jul 1 11:00:54.005: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f1c2a0 exit status 1 true [0xc000aa8cd0 0xc000aa8ce8 0xc000aa8d00] [0xc000aa8cd0 0xc000aa8ce8 0xc000aa8d00] [0xc000aa8ce0 0xc000aa8cf8] [0x9333e0 0x9333e0] 0xc001fe6180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:01:04.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p 
ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:01:04.099: INFO: rc: 1 Jul 1 11:01:04.099: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b5c80 exit status 1 true [0xc000a93548 0xc000a93580 0xc000a935d0] [0xc000a93548 0xc000a93580 0xc000a935d0] [0xc000a93570 0xc000a935a8] [0x9333e0 0x9333e0] 0xc00214cfc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:01:14.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:01:14.194: INFO: rc: 1 Jul 1 11:01:14.194: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d900f0 exit status 1 true [0xc00059a140 0xc00059a328 0xc00059a3d8] [0xc00059a140 0xc00059a328 0xc00059a3d8] [0xc00059a300 0xc00059a3d0] [0x9333e0 0x9333e0] 0xc000e4e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:01:24.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:01:24.305: INFO: rc: 1 Jul 1 11:01:24.305: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001bfc240 exit status 1 true [0xc00016a000 0xc00016a130 0xc00016a1c0] [0xc00016a000 0xc00016a130 0xc00016a1c0] [0xc00016a0e8 0xc00016a1b0] [0x9333e0 0x9333e0] 0xc001efa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:01:34.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:01:34.412: INFO: rc: 1 Jul 1 11:01:34.412: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001002150 exit status 1 true [0xc001148008 0xc001148020 0xc001148048] [0xc001148008 0xc001148020 0xc001148048] [0xc001148018 0xc001148030] [0x9333e0 0x9333e0] 0xc001922780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:01:44.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:01:44.532: INFO: rc: 1 Jul 1 11:01:44.532: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d90360 exit status 1 true [0xc00059a3f0 0xc00059a4e0 0xc00059a570] [0xc00059a3f0 0xc00059a4e0 0xc00059a570] [0xc00059a460 0xc00059a560] [0x9333e0 0x9333e0] 0xc000e4e600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:01:54.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:01:54.672: INFO: rc: 1 Jul 1 11:01:54.672: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d904b0 exit status 1 true [0xc00059a590 0xc00059a608 0xc00059a688] [0xc00059a590 0xc00059a608 0xc00059a688] [0xc00059a5f8 0xc00059a640] [0x9333e0 0x9333e0] 0xc000e4e960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:02:04.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:02:04.789: INFO: rc: 1 Jul 1 11:02:04.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d90660 exit status 1 true [0xc00059a6b0 0xc00059a740 0xc00059a7d8] [0xc00059a6b0 0xc00059a740 0xc00059a7d8] [0xc00059a708 0xc00059a7a8] [0x9333e0 0x9333e0] 0xc0017f40c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:02:14.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:02:14.889: INFO: rc: 1 Jul 1 11:02:14.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001002270 exit status 1 true [0xc001148050 0xc001148068 0xc001148080] [0xc001148050 0xc001148068 0xc001148080] [0xc001148060 0xc001148078] [0x9333e0 0x9333e0] 0xc001922e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:02:24.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:02:25.004: INFO: rc: 1 Jul 1 11:02:25.004: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc001d90810 exit status 1 true [0xc00059a7f0 0xc00059a8a0 0xc00059a990] [0xc00059a7f0 0xc00059a8a0 0xc00059a990] [0xc00059a868 0xc00059a8c0] [0x9333e0 0x9333e0] 0xc0017f5380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:02:35.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:02:35.098: INFO: rc: 1 Jul 1 11:02:35.098: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001636120 exit status 1 true [0xc001dfc000 0xc001dfc018 0xc001dfc030] [0xc001dfc000 0xc001dfc018 0xc001dfc030] [0xc001dfc010 0xc001dfc028] [0x9333e0 0x9333e0] 0xc0017eb0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:02:45.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:02:45.193: INFO: rc: 1 Jul 1 11:02:45.193: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001bfc3c0 exit status 1 true [0xc00016a1d0 0xc00016a318 0xc00016a4b0] [0xc00016a1d0 0xc00016a318 0xc00016a4b0] [0xc00016a278 0xc00016a460] [0x9333e0 0x9333e0] 0xc001efa6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:02:55.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:02:55.271: INFO: rc: 1 Jul 1 11:02:55.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001bfc4b0 exit status 1 true [0xc00016a4f8 0xc00016aa28 0xc00016aac8] [0xc00016a4f8 0xc00016aa28 0xc00016aac8] [0xc00016a9f0 0xc00016aaa0] [0x9333e0 0x9333e0] 0xc001efaa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:03:05.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:03:05.370: INFO: rc: 1 Jul 1 11:03:05.370: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001bfc5a0 exit status 1 true [0xc00016aad8 0xc00016ab30 0xc00016ac98] [0xc00016aad8 0xc00016ab30 0xc00016ac98] [0xc00016ab00 0xc00016ac78] [0x9333e0 0x9333e0] 
0xc001efade0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:03:15.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:03:15.466: INFO: rc: 1 Jul 1 11:03:15.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a20c0 exit status 1 true [0xc000aa8008 0xc000aa8020 0xc000aa8038] [0xc000aa8008 0xc000aa8020 0xc000aa8038] [0xc000aa8018 0xc000aa8030] [0x9333e0 0x9333e0] 0xc0014e7440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:03:25.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:03:25.585: INFO: rc: 1 Jul 1 11:03:25.586: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a21e0 exit status 1 true [0xc001148008 0xc001148020 0xc001148048] [0xc001148008 0xc001148020 0xc001148048] [0xc001148018 0xc001148030] [0x9333e0 0x9333e0] 0xc000e4e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:03:35.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:03:35.711: INFO: rc: 1 Jul 1 11:03:35.711: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016360f0 exit status 1 true [0xc000aa8040 0xc000aa8058 0xc000aa8070] [0xc000aa8040 0xc000aa8058 0xc000aa8070] [0xc000aa8050 0xc000aa8068] [0x9333e0 0x9333e0] 0xc001922780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:03:45.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:03:45.863: INFO: rc: 1 Jul 1 11:03:45.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001002180 exit status 1 true [0xc001dfc000 0xc001dfc018 0xc001dfc030] [0xc001dfc000 0xc001dfc018 0xc001dfc030] [0xc001dfc010 0xc001dfc028] [0x9333e0 0x9333e0] 0xc0017eafc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:03:55.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:03:55.983: INFO: rc: 1 Jul 1 11:03:55.983: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001636240 exit status 1 true [0xc000aa8078 0xc000aa8090 0xc000aa80a8] [0xc000aa8078 0xc000aa8090 0xc000aa80a8] [0xc000aa8088 0xc000aa80a0] [0x9333e0 0x9333e0] 0xc001922e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:04:05.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:04:06.082: INFO: rc: 1 Jul 1 11:04:06.083: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a2360 exit status 1 true [0xc001148050 0xc001148068 0xc001148080] [0xc001148050 0xc001148068 0xc001148080] [0xc001148060 0xc001148078] [0x9333e0 0x9333e0] 0xc000e4e600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:04:16.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:04:16.180: INFO: rc: 1 Jul 1 11:04:16.180: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010022a0 exit status 1 true [0xc001dfc038 0xc001dfc050 0xc001dfc068] [0xc001dfc038 0xc001dfc050 0xc001dfc068] [0xc001dfc048 0xc001dfc060] [0x9333e0 0x9333e0] 0xc0017eb4a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:04:26.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:04:26.288: INFO: rc: 1 Jul 1 11:04:26.288: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a2450 exit status 1 true [0xc001148088 0xc0011480a0 0xc0011480b8] [0xc001148088 0xc0011480a0 0xc0011480b8] [0xc001148098 0xc0011480b0] [0x9333e0 0x9333e0] 0xc000e4e960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:04:36.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:04:36.430: INFO: rc: 1 Jul 1 11:04:36.430: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a2570 exit status 1 true [0xc0011480c0 0xc0011480d8 0xc0011480f0] [0xc0011480c0 0xc0011480d8 0xc0011480f0] [0xc0011480d0 0xc0011480e8] [0x9333e0 0x9333e0] 0xc0017f40c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:04:46.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:04:46.550: INFO: rc: 1 Jul 1 11:04:46.550: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d90150 exit status 1 true [0xc00059a130 0xc00059a300 0xc00059a3d0] [0xc00059a130 0xc00059a300 0xc00059a3d0] [0xc00059a238 0xc00059a3b0] [0x9333e0 0x9333e0] 0xc001efa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:04:56.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:04:56.663: INFO: rc: 1 Jul 1 11:04:56.663: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d90450 exit status 1 true [0xc00059a3d8 0xc00059a460 0xc00059a560] [0xc00059a3d8 0xc00059a460 0xc00059a560] [0xc00059a430 0xc00059a528] [0x9333e0 0x9333e0] 0xc001efa6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:05:06.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:05:06.778: INFO: rc: 1 Jul 1 11:05:06.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001636390 exit status 1 true [0xc000aa80b0 0xc000aa80c8 0xc000aa80e0] [0xc000aa80b0 0xc000aa80c8 0xc000aa80e0] [0xc000aa80c0 0xc000aa80d8] [0x9333e0 0x9333e0] 0xc0019235c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:05:16.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:05:16.880: INFO: rc: 1 Jul 1 11:05:16.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001636120 exit status 1 true [0xc000aa8008 0xc000aa8020 0xc000aa8038] [0xc000aa8008 0xc000aa8020 0xc000aa8038] [0xc000aa8018 0xc000aa8030] [0x9333e0 0x9333e0] 0xc000e4e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:05:26.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:05:27.007: INFO: rc: 1 Jul 1 11:05:27.008: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d900f0 exit status 1 true [0xc00059a130 0xc00059a300 0xc00059a3d0] [0xc00059a130 0xc00059a300 0xc00059a3d0] [0xc00059a238 0xc00059a3b0] [0x9333e0 0x9333e0] 0xc0014e7440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 1 11:05:37.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f995p ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 1 11:05:37.124: INFO: rc: 1 Jul 1 11:05:37.124: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jul 1 11:05:37.124: INFO: Scaling statefulset ss to 0 Jul 1 11:05:37.139: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 11:05:37.141: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f995p Jul 1 11:05:37.143: INFO: Scaling statefulset ss to 0 Jul 1 11:05:37.150: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 11:05:37.152: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:05:37.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-f995p" for this suite. 
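The scale-down above is driven through the suite's Go client, but it maps onto ordinary kubectl operations. As a rough manual equivalent (a sketch, not the framework's own code; namespace and pod name are the ones recorded in this run), the retried exec line is the same command quoted in the log:

    # scale the StatefulSet to zero and watch its pods drain
    kubectl -n e2e-tests-statefulset-f995p scale statefulset ss --replicas=0
    kubectl -n e2e-tests-statefulset-f995p get pods -w
    # the command the framework kept retrying; '|| true' keeps a missing file from failing the shell
    kubectl -n e2e-tests-statefulset-f995p exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'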
Jul 1 11:05:43.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:05:43.293: INFO: namespace: e2e-tests-statefulset-f995p, resource: bindings, ignored listing per whitelist Jul 1 11:05:43.334: INFO: namespace e2e-tests-statefulset-f995p deletion completed in 6.141974982s • [SLOW TEST:362.284 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:05:43.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 11:05:43.465: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jul 1 11:05:43.471: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7t9ws/daemonsets","resourceVersion":"1842638"},"items":null} Jul 1 11:05:43.473: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7t9ws/pods","resourceVersion":"1842638"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:05:43.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-7t9ws" for this suite. 
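The rollback case above is skipped rather than failed: the suite found fewer than the two schedulable nodes it requires. A quick pre-flight count on a cluster like this one (a sketch; cordoned nodes would still need to be excluded by hand) is:

    # the test needs at least 2 schedulable nodes; this cluster has 1
    kubectl get nodes --no-headers | wc -l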
Jul 1 11:05:49.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:05:49.595: INFO: namespace: e2e-tests-daemonsets-7t9ws, resource: bindings, ignored listing per whitelist Jul 1 11:05:49.639: INFO: namespace e2e-tests-daemonsets-7t9ws deletion completed in 6.151020244s S [SKIPPING] [6.305 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 11:05:43.465: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:05:49.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v2c9n STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 11:05:49.854: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 11:06:08.025: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-v2c9n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:06:08.025: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:06:09.217: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:06:09.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-v2c9n" for this suite. 
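The UDP check above boils down to one probe, recorded verbatim in the ExecWithOptions entry: send 'hostName' from the hostexec container to the netserver pod IP on port 8081 and expect a non-empty reply. Reproduced as a standalone command (pod, container, namespace and IP as logged in this run):

    kubectl -n e2e-tests-pod-network-test-v2c9n exec host-test-container-pod -c hostexec -- \
      /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*\$'"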
Jul 1 11:06:33.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:06:33.342: INFO: namespace: e2e-tests-pod-network-test-v2c9n, resource: bindings, ignored listing per whitelist Jul 1 11:06:33.356: INFO: namespace e2e-tests-pod-network-test-v2c9n deletion completed in 24.134857282s • [SLOW TEST:43.716 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:06:33.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7dqcn [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-7dqcn STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-7dqcn STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-7dqcn STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-7dqcn Jul 1 11:06:37.556: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7dqcn, name: ss-0, uid: 4ad36614-9bf0-11e9-a678-fa163e0cec1d, status phase: Pending. Waiting for statefulset controller to delete. Jul 1 11:06:45.727: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7dqcn, name: ss-0, uid: 4ad36614-9bf0-11e9-a678-fa163e0cec1d, status phase: Failed. Waiting for statefulset controller to delete. Jul 1 11:06:45.829: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7dqcn, name: ss-0, uid: 4ad36614-9bf0-11e9-a678-fa163e0cec1d, status phase: Failed. Waiting for statefulset controller to delete. 
Jul 1 11:06:45.900: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-7dqcn STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-7dqcn STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-7dqcn and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 11:06:56.050: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7dqcn Jul 1 11:06:56.053: INFO: Scaling statefulset ss to 0 Jul 1 11:07:06.072: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 11:07:06.076: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:07:06.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-7dqcn" for this suite. Jul 1 11:07:12.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:07:12.331: INFO: namespace: e2e-tests-statefulset-7dqcn, resource: bindings, ignored listing per whitelist Jul 1 11:07:12.348: INFO: namespace e2e-tests-statefulset-7dqcn deletion completed in 6.237012767s • [SLOW TEST:38.992 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:07:12.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 1 11:07:17.041: INFO: Successfully updated pod "annotationupdate607a3bb7-9bf0-11e9-9f49-0242ac110006" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:07:19.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5p6dm" for this suite. 
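The annotation test above relies on the kubelet refreshing a downwardAPI volume after the pod's metadata changes. A hand-driven version looks roughly like the following; only the pod name and namespace come from this run, while the annotation key/value and the mount path inside the container are illustrative placeholders:

    # hypothetical key/value; --overwrite lets the same key be updated repeatedly
    kubectl -n e2e-tests-downward-api-5p6dm annotate pod annotationupdate607a3bb7-9bf0-11e9-9f49-0242ac110006 \
      example-key=example-value --overwrite
    # hypothetical mount path for the projected annotations file
    kubectl -n e2e-tests-downward-api-5p6dm exec annotationupdate607a3bb7-9bf0-11e9-9f49-0242ac110006 -- \
      cat /etc/podinfo/annotations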
Jul 1 11:07:41.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:07:41.596: INFO: namespace: e2e-tests-downward-api-5p6dm, resource: bindings, ignored listing per whitelist Jul 1 11:07:41.610: INFO: namespace e2e-tests-downward-api-5p6dm deletion completed in 22.544987018s • [SLOW TEST:29.262 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:07:41.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 1 11:07:41.769: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1842987,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 11:07:41.769: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1842987,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 1 11:07:51.782: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843000,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 1 11:07:51.782: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843000,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 1 11:08:01.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843013,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 11:08:01.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843013,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 1 11:08:11.803: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843026,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Jul 1 11:08:11.803: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-a,UID:71f16373-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843026,Generation:0,CreationTimestamp:2019-07-01 11:07:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 1 11:08:21.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-b,UID:89cef206-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843039,Generation:0,CreationTimestamp:2019-07-01 11:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 11:08:21.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-b,UID:89cef206-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843039,Generation:0,CreationTimestamp:2019-07-01 11:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 1 11:08:31.860: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-b,UID:89cef206-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843052,Generation:0,CreationTimestamp:2019-07-01 11:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 11:08:31.860: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-n695c,SelfLink:/api/v1/namespaces/e2e-tests-watch-n695c/configmaps/e2e-watch-test-configmap-b,UID:89cef206-9bf0-11e9-a678-fa163e0cec1d,ResourceVersion:1843052,Generation:0,CreationTimestamp:2019-07-01 11:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:08:41.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-n695c" for this suite. Jul 1 11:08:47.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:08:47.988: INFO: namespace: e2e-tests-watch-n695c, resource: bindings, ignored listing per whitelist Jul 1 11:08:48.027: INFO: namespace e2e-tests-watch-n695c deletion completed in 6.154707971s • [SLOW TEST:66.417 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:08:48.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 1 11:08:56.263: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 11:08:56.330: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 11:08:58.330: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 11:08:58.333: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 11:09:00.331: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 11:09:00.335: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 11:09:02.331: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 11:09:02.336: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 11:09:04.331: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 11:09:04.337: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 11:09:06.331: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 11:09:06.337: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:09:06.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-kplbz" for this suite. 
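The preStop check above follows a simple pattern: delete the hooked pod, wait for it to disappear, then confirm that the helper pod created earlier ("the container to handle the HTTPGet hook request") actually received the hook's GET. By hand that is roughly the following; the handler pod name is a placeholder, and how it exposes the requests it served depends on its image:

    kubectl -n e2e-tests-container-lifecycle-hook-kplbz delete pod pod-with-prestop-http-hook
    # placeholder handler pod; inspect it for the recorded preStop request
    kubectl -n e2e-tests-container-lifecycle-hook-kplbz logs <hook-handler-pod>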
Jul 1 11:09:28.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:09:28.415: INFO: namespace: e2e-tests-container-lifecycle-hook-kplbz, resource: bindings, ignored listing per whitelist Jul 1 11:09:28.435: INFO: namespace e2e-tests-container-lifecycle-hook-kplbz deletion completed in 22.081505862s • [SLOW TEST:40.408 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:09:28.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-b19508ad-9bf0-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 11:09:28.619: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-fpxlf" to be "success or failure" Jul 1 11:09:28.625: INFO: Pod "pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143729ms Jul 1 11:09:30.659: INFO: Pod "pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040483106s Jul 1 11:09:32.666: INFO: Pod "pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047133092s STEP: Saw pod success Jul 1 11:09:32.666: INFO: Pod "pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:09:32.670: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006 container projected-configmap-volume-test: STEP: delete the pod Jul 1 11:09:32.704: INFO: Waiting for pod pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006 to disappear Jul 1 11:09:32.719: INFO: Pod pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:09:32.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fpxlf" for this suite. 
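As with the other volume tests, the projected-ConfigMap assertion is made against the test container's stdout; the "Trying to get logs" step above is, by hand, just:

    kubectl -n e2e-tests-projected-fpxlf logs \
      pod-projected-configmaps-b195a9a7-9bf0-11e9-9f49-0242ac110006 -c projected-configmap-volume-test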
Jul 1 11:09:38.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:09:38.853: INFO: namespace: e2e-tests-projected-fpxlf, resource: bindings, ignored listing per whitelist Jul 1 11:09:38.884: INFO: namespace e2e-tests-projected-fpxlf deletion completed in 6.133511955s • [SLOW TEST:10.448 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:09:38.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:09:39.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-l27l2" for this suite. 
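The "secure master service" assertion corresponds to the built-in kubernetes Service in the default namespace, which fronts the API server over HTTPS. A spot check (a sketch of the idea, not the test's exact assertions) is:

    kubectl get service kubernetes -n default
    # expect a ClusterIP service exposing 443/TCP (https)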
Jul 1 11:09:45.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:09:45.116: INFO: namespace: e2e-tests-services-l27l2, resource: bindings, ignored listing per whitelist Jul 1 11:09:45.182: INFO: namespace e2e-tests-services-l27l2 deletion completed in 6.1144502s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.298 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:09:45.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 1 11:09:45.271: INFO: Waiting up to 5m0s for pod "pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-qfsdj" to be "success or failure" Jul 1 11:09:45.275: INFO: Pod "pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.536457ms Jul 1 11:09:47.278: INFO: Pod "pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006567949s Jul 1 11:09:49.283: INFO: Pod "pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011553821s STEP: Saw pod success Jul 1 11:09:49.283: INFO: Pod "pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:09:49.287: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006 container test-container: STEP: delete the pod Jul 1 11:09:49.349: INFO: Waiting for pod pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006 to disappear Jul 1 11:09:49.359: INFO: Pod pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:09:49.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qfsdj" for this suite. 
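The emptyDir permission case is also judged from the pod's own output: the test container writes into the emptyDir mount and prints the resulting mode, owner and content, which the suite then compares against 0644 and a non-root UID. Fetching that output by hand (names as recorded in this run):

    kubectl -n e2e-tests-emptydir-qfsdj logs pod-bb8ed0ef-9bf0-11e9-9f49-0242ac110006 -c test-container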
Jul 1 11:09:55.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:09:55.483: INFO: namespace: e2e-tests-emptydir-qfsdj, resource: bindings, ignored listing per whitelist Jul 1 11:09:55.523: INFO: namespace e2e-tests-emptydir-qfsdj deletion completed in 6.158529226s • [SLOW TEST:10.341 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:09:55.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 11:09:55.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-9pl4n" to be "success or failure" Jul 1 11:09:55.707: INFO: Pod "downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 62.139946ms Jul 1 11:09:57.713: INFO: Pod "downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067571449s Jul 1 11:09:59.719: INFO: Pod "downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074330634s STEP: Saw pod success Jul 1 11:09:59.719: INFO: Pod "downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:09:59.725: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006 container client-container: STEP: delete the pod Jul 1 11:09:59.769: INFO: Waiting for pod downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006 to disappear Jul 1 11:09:59.788: INFO: Pod downwardapi-volume-c1bc2f99-9bf0-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:09:59.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9pl4n" for this suite. 
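For reference, the downward API case above exposes the container's limits.memory through a volume file; because the container sets no memory limit, the kubelet falls back to the node's allocatable memory, which is exactly what the test asserts. A minimal sketch of that shape (name, image, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # deliberately no resources.limits.memory, so the value defaults to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory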
Jul 1 11:10:05.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:10:05.847: INFO: namespace: e2e-tests-downward-api-9pl4n, resource: bindings, ignored listing per whitelist Jul 1 11:10:05.950: INFO: namespace e2e-tests-downward-api-9pl4n deletion completed in 6.158723725s • [SLOW TEST:10.427 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:10:05.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jul 1 11:10:06.072: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:10:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cl69l" for this suite. 
Jul 1 11:10:12.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:10:12.232: INFO: namespace: e2e-tests-kubectl-cl69l, resource: bindings, ignored listing per whitelist Jul 1 11:10:12.284: INFO: namespace e2e-tests-kubectl-cl69l deletion completed in 6.130026147s • [SLOW TEST:6.333 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:10:12.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0701 11:10:22.390178 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 11:10:22.390: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:10:22.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-kh8rs" for this suite. 
Jul 1 11:10:28.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:10:28.454: INFO: namespace: e2e-tests-gc-kh8rs, resource: bindings, ignored listing per whitelist Jul 1 11:10:28.482: INFO: namespace e2e-tests-gc-kh8rs deletion completed in 6.088162837s • [SLOW TEST:16.198 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:10:28.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d55d51da-9bf0-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 11:10:28.573: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-bmvm7" to be "success or failure" Jul 1 11:10:28.590: INFO: Pod "pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.806867ms Jul 1 11:10:30.596: INFO: Pod "pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022922742s Jul 1 11:10:32.603: INFO: Pod "pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030151588s STEP: Saw pod success Jul 1 11:10:32.604: INFO: Pod "pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:10:32.610: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006 container projected-configmap-volume-test: STEP: delete the pod Jul 1 11:10:32.649: INFO: Waiting for pod pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006 to disappear Jul 1 11:10:32.656: INFO: Pod pod-projected-configmaps-d55de967-9bf0-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:10:32.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bmvm7" for this suite. 
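For reference, the projected configMap case above mounts one ConfigMap through two separate projected volumes in the same pod and reads the key back from both mount points. A sketch along those lines (ConfigMap name, key, image, and paths are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-projected-cm
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /etc/projected-volume-1
    - name: projected-volume-2
      mountPath: /etc/projected-volume-2
  volumes:
  - name: projected-volume-1
    projected:
      sources:
      - configMap:
          name: demo-projected-cm   # same ConfigMap projected twice
  - name: projected-volume-2
    projected:
      sources:
      - configMap:
          name: demo-projected-cm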
Jul 1 11:10:40.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:10:40.719: INFO: namespace: e2e-tests-projected-bmvm7, resource: bindings, ignored listing per whitelist Jul 1 11:10:40.851: INFO: namespace e2e-tests-projected-bmvm7 deletion completed in 8.187499681s • [SLOW TEST:12.369 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:10:40.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:10:45.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-bjxx6" for this suite. 
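For reference, the hostAliases case above comes down to a pod spec like the following: the kubelet appends each ip/hostnames pair to the container's /etc/hosts before starting it. The address, hostnames, and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/hosts"]   # the extra entries appear at the end of this file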
Jul 1 11:11:27.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:11:27.179: INFO: namespace: e2e-tests-kubelet-test-bjxx6, resource: bindings, ignored listing per whitelist Jul 1 11:11:27.234: INFO: namespace e2e-tests-kubelet-test-bjxx6 deletion completed in 42.188585309s • [SLOW TEST:46.383 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:11:27.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 1 11:11:27.390: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:11:32.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-dp2g6" for this suite. 
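For reference, "invoke init containers on a RestartAlways pod" corresponds to a spec of this shape: the init containers run to completion one after another, and only then does the regular container start. Images and commands below are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: init-containers-demo
spec:
  restartPolicy: Always              # RestartAlways, as in the test name
  initContainers:
  - name: init1
    image: busybox:1.36
    command: ["sh", "-c", "echo first init step"]
  - name: init2
    image: busybox:1.36
    command: ["sh", "-c", "echo second init step"]
  containers:
  - name: run1
    image: busybox:1.36
    command: ["sh", "-c", "echo main container running; sleep 3600"]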
Jul 1 11:11:54.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:11:54.285: INFO: namespace: e2e-tests-init-container-dp2g6, resource: bindings, ignored listing per whitelist Jul 1 11:11:54.349: INFO: namespace e2e-tests-init-container-dp2g6 deletion completed in 22.159907064s • [SLOW TEST:27.115 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:11:54.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 1 11:11:54.611: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4dt48,SelfLink:/api/v1/namespaces/e2e-tests-watch-4dt48/configmaps/e2e-watch-test-watch-closed,UID:08a32b2f-9bf1-11e9-a678-fa163e0cec1d,ResourceVersion:1843581,Generation:0,CreationTimestamp:2019-07-01 11:11:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 11:11:54.611: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4dt48,SelfLink:/api/v1/namespaces/e2e-tests-watch-4dt48/configmaps/e2e-watch-test-watch-closed,UID:08a32b2f-9bf1-11e9-a678-fa163e0cec1d,ResourceVersion:1843582,Generation:0,CreationTimestamp:2019-07-01 11:11:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 1 11:11:54.663: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4dt48,SelfLink:/api/v1/namespaces/e2e-tests-watch-4dt48/configmaps/e2e-watch-test-watch-closed,UID:08a32b2f-9bf1-11e9-a678-fa163e0cec1d,ResourceVersion:1843583,Generation:0,CreationTimestamp:2019-07-01 11:11:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 11:11:54.663: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4dt48,SelfLink:/api/v1/namespaces/e2e-tests-watch-4dt48/configmaps/e2e-watch-test-watch-closed,UID:08a32b2f-9bf1-11e9-a678-fa163e0cec1d,ResourceVersion:1843584,Generation:0,CreationTimestamp:2019-07-01 11:11:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:11:54.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-4dt48" for this suite. Jul 1 11:12:00.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:12:00.796: INFO: namespace: e2e-tests-watch-4dt48, resource: bindings, ignored listing per whitelist Jul 1 11:12:00.906: INFO: namespace e2e-tests-watch-4dt48 deletion completed in 6.228570393s • [SLOW TEST:6.557 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:12:00.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-d2cmz [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jul 1 11:12:01.014: INFO: Found 0 stateful pods, waiting for 3 Jul 1 11:12:11.020: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 11:12:11.020: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 11:12:11.020: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jul 1 11:12:11.065: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 1 11:12:21.158: INFO: Updating stateful set ss2 Jul 1 11:12:21.180: INFO: Waiting for Pod e2e-tests-statefulset-d2cmz/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666 STEP: Restoring Pods to the correct revision when they are deleted Jul 1 11:12:31.528: INFO: Found 2 stateful pods, waiting for 3 Jul 1 11:12:41.535: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 11:12:41.535: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 11:12:41.535: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jul 1 11:12:41.576: INFO: Updating stateful set ss2 Jul 1 11:12:41.592: INFO: Waiting for Pod e2e-tests-statefulset-d2cmz/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666 Jul 1 11:12:51.626: INFO: Updating stateful set ss2 Jul 1 11:12:51.643: INFO: Waiting for StatefulSet e2e-tests-statefulset-d2cmz/ss2 to complete update Jul 1 11:12:51.643: INFO: Waiting for Pod e2e-tests-statefulset-d2cmz/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 1 11:13:01.657: INFO: Deleting all statefulset in ns e2e-tests-statefulset-d2cmz Jul 1 11:13:01.661: INFO: Scaling statefulset ss2 to 0 Jul 1 11:13:21.698: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 11:13:21.701: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:13:21.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-d2cmz" for this suite.
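For reference, the canary and phased rolling updates above are driven by the StatefulSet RollingUpdate partition: pods with an ordinal greater than or equal to the partition get the new template, lower ordinals keep the old revision, and lowering the partition step by step phases in the rollout. A sketch of the relevant spec (names, labels, and replica count are illustrative; the images match the ones the log mentions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2-demo
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                   # canary: only ordinal 2 is updated; lower to 0 to finish the rollout
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated image; 1.14-alpine was the original
        ports:
        - containerPort: 80

Patching spec.updateStrategy.rollingUpdate.partition from 3 down to 0 in steps reproduces the "not applying", canary, and phased stages seen in the log.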
Jul 1 11:13:27.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:13:28.000: INFO: namespace: e2e-tests-statefulset-d2cmz, resource: bindings, ignored listing per whitelist Jul 1 11:13:28.028: INFO: namespace e2e-tests-statefulset-d2cmz deletion completed in 6.274276188s • [SLOW TEST:87.121 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:13:28.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-406dbb1a-9bf1-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 11:13:28.225: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-2tfrx" to be "success or failure" Jul 1 11:13:28.234: INFO: Pod "pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201627ms Jul 1 11:13:30.276: INFO: Pod "pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051195257s Jul 1 11:13:32.284: INFO: Pod "pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058897943s STEP: Saw pod success Jul 1 11:13:32.284: INFO: Pod "pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:13:32.287: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006 container projected-secret-volume-test: STEP: delete the pod Jul 1 11:13:32.457: INFO: Waiting for pod pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006 to disappear Jul 1 11:13:32.471: INFO: Pod pod-projected-secrets-406fd44d-9bf1-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:13:32.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2tfrx" for this suite. 
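For reference, the projected secret case above combines a volume-level defaultMode with a pod-level non-root user and fsGroup, then checks the resulting file modes and content. A minimal sketch in that spirit; the secret name, key, UID/GID, mode, and image are illustrative.

apiVersion: v1
kind: Secret
metadata:
  name: demo-projected-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 1001                    # group ownership applied to the volume files
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440              # mode applied to the projected files
      sources:
      - secret:
          name: demo-projected-secret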
Jul 1 11:13:38.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:13:38.602: INFO: namespace: e2e-tests-projected-2tfrx, resource: bindings, ignored listing per whitelist Jul 1 11:13:38.610: INFO: namespace e2e-tests-projected-2tfrx deletion completed in 6.134954632s • [SLOW TEST:10.582 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:13:38.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0701 11:14:09.287324 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 11:14:09.287: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:14:09.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vx67r" for this suite. 
Jul 1 11:14:15.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:14:15.450: INFO: namespace: e2e-tests-gc-vx67r, resource: bindings, ignored listing per whitelist Jul 1 11:14:15.481: INFO: namespace e2e-tests-gc-vx67r deletion completed in 6.189139216s • [SLOW TEST:36.871 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:14:15.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-xs7xc/secret-test-5cab39d7-9bf1-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 11:14:15.580: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-xs7xc" to be "success or failure" Jul 1 11:14:15.635: INFO: Pod "pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 54.97829ms Jul 1 11:14:17.644: INFO: Pod "pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063739079s Jul 1 11:14:19.651: INFO: Pod "pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071258977s STEP: Saw pod success Jul 1 11:14:19.651: INFO: Pod "pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:14:19.656: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006 container env-test: STEP: delete the pod Jul 1 11:14:19.710: INFO: Waiting for pod pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006 to disappear Jul 1 11:14:19.755: INFO: Pod pod-configmaps-5cabd3b7-9bf1-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:14:19.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xs7xc" for this suite. 
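For reference, consuming a Secret "via the environment" means referencing its keys with secretKeyRef (or envFrom) rather than mounting a volume; the test then reads the variable back from the container. A minimal sketch with illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: demo-env-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.36
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-env-secret
          key: data-1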
Jul 1 11:14:25.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:14:25.870: INFO: namespace: e2e-tests-secrets-xs7xc, resource: bindings, ignored listing per whitelist Jul 1 11:14:25.911: INFO: namespace e2e-tests-secrets-xs7xc deletion completed in 6.150121194s • [SLOW TEST:10.430 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:14:25.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-b5dmp STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 11:14:26.004: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 11:14:44.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-b5dmp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:14:44.146: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:14:44.423: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:14:44.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-b5dmp" for this suite. 
Jul 1 11:15:06.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:15:06.547: INFO: namespace: e2e-tests-pod-network-test-b5dmp, resource: bindings, ignored listing per whitelist Jul 1 11:15:06.571: INFO: namespace e2e-tests-pod-network-test-b5dmp deletion completed in 22.143003502s • [SLOW TEST:40.660 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:15:06.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 11:15:06.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-q845p' Jul 1 11:15:08.587: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 1 11:15:08.587: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jul 1 11:15:08.667: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-psrbw] Jul 1 11:15:08.667: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-psrbw" in namespace "e2e-tests-kubectl-q845p" to be "running and ready" Jul 1 11:15:08.679: INFO: Pod "e2e-test-nginx-rc-psrbw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.842022ms Jul 1 11:15:10.685: INFO: Pod "e2e-test-nginx-rc-psrbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017814363s Jul 1 11:15:12.691: INFO: Pod "e2e-test-nginx-rc-psrbw": Phase="Running", Reason="", readiness=true. Elapsed: 4.023182301s Jul 1 11:15:12.691: INFO: Pod "e2e-test-nginx-rc-psrbw" satisfied condition "running and ready" Jul 1 11:15:12.691: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-psrbw] Jul 1 11:15:12.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-q845p' Jul 1 11:15:12.845: INFO: stderr: "" Jul 1 11:15:12.845: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jul 1 11:15:12.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-q845p' Jul 1 11:15:12.951: INFO: stderr: "" Jul 1 11:15:12.951: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:15:12.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q845p" for this suite. Jul 1 11:15:34.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:15:35.099: INFO: namespace: e2e-tests-kubectl-q845p, resource: bindings, ignored listing per whitelist Jul 1 11:15:35.148: INFO: namespace e2e-tests-kubectl-q845p deletion completed in 22.192863845s • [SLOW TEST:28.577 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:15:35.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-8c2f4a16-9bf1-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 11:15:35.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-28vhc" to be "success or failure" Jul 1 11:15:35.326: INFO: Pod "pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 20.750083ms Jul 1 11:15:37.360: INFO: Pod "pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054125662s Jul 1 11:15:39.397: INFO: Pod "pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.091326409s STEP: Saw pod success Jul 1 11:15:39.397: INFO: Pod "pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:15:39.401: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006 container configmap-volume-test: STEP: delete the pod Jul 1 11:15:39.434: INFO: Waiting for pod pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006 to disappear Jul 1 11:15:39.443: INFO: Pod pod-configmaps-8c305550-9bf1-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:15:39.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-28vhc" for this suite. Jul 1 11:15:45.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:15:45.637: INFO: namespace: e2e-tests-configmap-28vhc, resource: bindings, ignored listing per whitelist Jul 1 11:15:45.651: INFO: namespace e2e-tests-configmap-28vhc deletion completed in 6.203379553s • [SLOW TEST:10.502 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:15:45.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jul 1 11:15:45.764: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 11:15:45.851: INFO: Waiting for terminating namespaces to be deleted... 
Jul 1 11:15:45.858: INFO: Logging pods the kubelet thinks is on node hunter-server-x6tdbol33slm before test Jul 1 11:15:45.874: INFO: kube-apiserver-hunter-server-x6tdbol33slm from kube-system started at (0 container statuses recorded) Jul 1 11:15:45.874: INFO: weave-net-z4vkv from kube-system started at 2019-06-16 12:55:36 +0000 UTC (2 container statuses recorded) Jul 1 11:15:45.874: INFO: Container weave ready: true, restart count 0 Jul 1 11:15:45.874: INFO: Container weave-npc ready: true, restart count 0 Jul 1 11:15:45.874: INFO: kube-scheduler-hunter-server-x6tdbol33slm from kube-system started at (0 container statuses recorded) Jul 1 11:15:45.874: INFO: coredns-86c58d9df4-99n2k from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded) Jul 1 11:15:45.874: INFO: Container coredns ready: true, restart count 0 Jul 1 11:15:45.874: INFO: coredns-86c58d9df4-zdm4x from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded) Jul 1 11:15:45.874: INFO: Container coredns ready: true, restart count 0 Jul 1 11:15:45.874: INFO: kube-proxy-ww64l from kube-system started at 2019-06-16 12:55:34 +0000 UTC (1 container statuses recorded) Jul 1 11:15:45.874: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 11:15:45.874: INFO: etcd-hunter-server-x6tdbol33slm from kube-system started at (0 container statuses recorded) Jul 1 11:15:45.874: INFO: kube-controller-manager-hunter-server-x6tdbol33slm from kube-system started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod coredns-86c58d9df4-99n2k requesting resource cpu=100m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod coredns-86c58d9df4-zdm4x requesting resource cpu=100m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod etcd-hunter-server-x6tdbol33slm requesting resource cpu=0m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod kube-apiserver-hunter-server-x6tdbol33slm requesting resource cpu=250m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod kube-controller-manager-hunter-server-x6tdbol33slm requesting resource cpu=200m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod kube-proxy-ww64l requesting resource cpu=0m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod kube-scheduler-hunter-server-x6tdbol33slm requesting resource cpu=100m on Node hunter-server-x6tdbol33slm Jul 1 11:15:46.001: INFO: Pod weave-net-z4vkv requesting resource cpu=20m on Node hunter-server-x6tdbol33slm STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-92929879-9bf1-11e9-9f49-0242ac110006.15ad444da425e166], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-kk842/filler-pod-92929879-9bf1-11e9-9f49-0242ac110006 to hunter-server-x6tdbol33slm] STEP: Considering event: Type = [Normal], Name = [filler-pod-92929879-9bf1-11e9-9f49-0242ac110006.15ad444dfc55d65d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-92929879-9bf1-11e9-9f49-0242ac110006.15ad444e09b51205], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-92929879-9bf1-11e9-9f49-0242ac110006.15ad444e1ef5435c], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ad444e94af3d6a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-x6tdbol33slm STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:15:51.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-kk842" for this suite. Jul 1 11:15:57.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:15:57.434: INFO: namespace: e2e-tests-sched-pred-kk842, resource: bindings, ignored listing per whitelist Jul 1 11:15:57.449: INFO: namespace e2e-tests-sched-pred-kk842 deletion completed in 6.287116032s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.798 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:15:57.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-99768d8c-9bf1-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 11:15:57.574: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-pqmf2" to be "success or failure" Jul 1 11:15:57.578: INFO: Pod "pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.846033ms Jul 1 11:15:59.596: INFO: Pod "pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022365843s Jul 1 11:16:01.602: INFO: Pod "pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027957581s STEP: Saw pod success Jul 1 11:16:01.602: INFO: Pod "pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:16:01.606: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006 container projected-secret-volume-test: STEP: delete the pod Jul 1 11:16:01.648: INFO: Waiting for pod pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006 to disappear Jul 1 11:16:01.654: INFO: Pod pod-projected-secrets-997758b2-9bf1-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:16:01.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pqmf2" for this suite. Jul 1 11:16:07.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:16:07.706: INFO: namespace: e2e-tests-projected-pqmf2, resource: bindings, ignored listing per whitelist Jul 1 11:16:07.869: INFO: namespace e2e-tests-projected-pqmf2 deletion completed in 6.197176878s • [SLOW TEST:10.420 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:16:07.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jmx9f in namespace e2e-tests-proxy-k4hzh I0701 11:16:08.078972 8 runners.go:184] Created replication controller with name: proxy-service-jmx9f, namespace: e2e-tests-proxy-k4hzh, replica count: 1 I0701 11:16:09.129326 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 11:16:10.129501 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 11:16:11.129678 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:12.129864 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 
created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:13.130131 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:14.130311 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:15.130471 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:16.130648 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:17.130863 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:18.131051 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:19.131217 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 11:16:20.131416 8 runners.go:184] proxy-service-jmx9f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 11:16:20.136: INFO: setup took 12.102817321s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 1 11:16:20.163: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-k4hzh/pods/proxy-service-jmx9f-sqttl:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jul 1 11:16:29.339: INFO: namespace e2e-tests-kubectl-8jg7q Jul 1 11:16:29.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8jg7q' Jul 1 11:16:29.524: INFO: stderr: "" Jul 1 11:16:29.524: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jul 1 11:16:30.528: INFO: Selector matched 1 pods for map[app:redis] Jul 1 11:16:30.528: INFO: Found 0 / 1 Jul 1 11:16:31.530: INFO: Selector matched 1 pods for map[app:redis] Jul 1 11:16:31.530: INFO: Found 0 / 1 Jul 1 11:16:32.531: INFO: Selector matched 1 pods for map[app:redis] Jul 1 11:16:32.531: INFO: Found 1 / 1 Jul 1 11:16:32.531: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 11:16:32.535: INFO: Selector matched 1 pods for map[app:redis] Jul 1 11:16:32.535: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 1 11:16:32.535: INFO: wait on redis-master startup in e2e-tests-kubectl-8jg7q Jul 1 11:16:32.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9vbbr redis-master --namespace=e2e-tests-kubectl-8jg7q' Jul 1 11:16:32.695: INFO: stderr: "" Jul 1 11:16:32.695: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. 
''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Jul 11:16:31.481 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jul 11:16:31.481 # Server started, Redis version 3.2.12\n1:M 01 Jul 11:16:31.481 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jul 11:16:31.481 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jul 1 11:16:32.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8jg7q' Jul 1 11:16:32.949: INFO: stderr: "" Jul 1 11:16:32.949: INFO: stdout: "service/rm2 exposed\n" Jul 1 11:16:32.952: INFO: Service rm2 in namespace e2e-tests-kubectl-8jg7q found. STEP: exposing service Jul 1 11:16:34.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8jg7q' Jul 1 11:16:35.132: INFO: stderr: "" Jul 1 11:16:35.132: INFO: stdout: "service/rm3 exposed\n" Jul 1 11:16:35.147: INFO: Service rm3 in namespace e2e-tests-kubectl-8jg7q found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:16:37.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8jg7q" for this suite. 
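The expose sequence exercised by this test can be reproduced outside the suite with plain kubectl. In the sketch below the ReplicationController manifest, its image and the absence of an explicit namespace are illustrative stand-ins; only the two expose command lines mirror the flags recorded in the log above.

# minimal sketch, assuming any single-replica RC is acceptable as the expose target
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2
        ports:
        - containerPort: 6379
EOF
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3   # both services should point at the same pod IP on 6379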
Jul 1 11:17:01.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:17:01.272: INFO: namespace: e2e-tests-kubectl-8jg7q, resource: bindings, ignored listing per whitelist Jul 1 11:17:01.280: INFO: namespace e2e-tests-kubectl-8jg7q deletion completed in 24.118789937s • [SLOW TEST:32.569 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:17:01.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 1 11:17:09.493: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:09.510: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:11.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:11.515: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:13.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:13.516: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:15.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:15.515: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:17.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:17.515: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:19.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:19.517: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:21.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:21.514: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:23.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:23.519: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:25.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:25.531: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:27.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:27.515: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 11:17:29.510: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 11:17:29.515: INFO: Pod 
pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:17:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-r7b8s" for this suite. Jul 1 11:17:51.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:17:51.615: INFO: namespace: e2e-tests-container-lifecycle-hook-r7b8s, resource: bindings, ignored listing per whitelist Jul 1 11:17:51.708: INFO: namespace e2e-tests-container-lifecycle-hook-r7b8s deletion completed in 22.177217251s • [SLOW TEST:50.428 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:17:51.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 11:17:51.839: INFO: Creating ReplicaSet my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006 Jul 1 11:17:51.859: INFO: Pod name my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006: Found 0 pods out of 1 Jul 1 11:17:56.866: INFO: Pod name my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006: Found 1 pods out of 1 Jul 1 11:17:56.866: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006" is running Jul 1 11:17:56.870: INFO: Pod "my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006-d6lcj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 11:17:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 11:17:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 11:17:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 11:17:51 +0000 UTC Reason: Message:}]) Jul 1 11:17:56.870: INFO: Trying to dial the pod Jul 1 11:18:01.887: INFO: Controller my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006: Got expected result from replica 1 [my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006-d6lcj]: "my-hostname-basic-dd93f892-9bf1-11e9-9f49-0242ac110006-d6lcj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:18:01.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-6vs7g" for this suite. Jul 1 11:18:07.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:18:07.954: INFO: namespace: e2e-tests-replicaset-6vs7g, resource: bindings, ignored listing per whitelist Jul 1 11:18:08.035: INFO: namespace e2e-tests-replicaset-6vs7g deletion completed in 6.14370401s • [SLOW TEST:16.327 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:18:08.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e747934a-9bf1-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 11:18:08.158: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-mnwnw" to be "success or failure" Jul 1 11:18:08.166: INFO: Pod "pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.82221ms Jul 1 11:18:10.237: INFO: Pod "pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07882122s Jul 1 11:18:12.244: INFO: Pod "pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085097032s STEP: Saw pod success Jul 1 11:18:12.244: INFO: Pod "pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:18:12.248: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006 container projected-secret-volume-test: STEP: delete the pod Jul 1 11:18:12.288: INFO: Waiting for pod pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006 to disappear Jul 1 11:18:12.302: INFO: Pod pod-projected-secrets-e748b41f-9bf1-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:18:12.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mnwnw" for this suite. 
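A hand-rolled equivalent of the projected-secret-with-defaultMode scenario checked above; the secret name, mount path and the 0400 mode are assumptions for illustration (the suite generates its own names and mode at run time).

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400          # applied to every file rendered from the listed sources
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs pod-projected-secrets-demo   # prints "value-1" once the pod reaches Succeeded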
Jul 1 11:18:18.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:18:18.425: INFO: namespace: e2e-tests-projected-mnwnw, resource: bindings, ignored listing per whitelist Jul 1 11:18:18.438: INFO: namespace e2e-tests-projected-mnwnw deletion completed in 6.12627959s • [SLOW TEST:10.403 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:18:18.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-2dnqr STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-2dnqr STEP: Deleting pre-stop pod Jul 1 11:18:31.649: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:18:31.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-2dnqr" for this suite. 
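The preStop behaviour this test verifies can be observed with a much smaller pod. The hook below only writes a marker file and sleeps, rather than calling back to a server pod the way the suite does, and all names are illustrative.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop > /tmp/prestop-ran; sleep 5"]
EOF
# On delete, the kubelet runs the preStop exec (bounded by the grace period)
# before the container receives SIGTERM.
kubectl delete pod pod-with-prestop-demo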
Jul 1 11:19:09.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:19:09.777: INFO: namespace: e2e-tests-prestop-2dnqr, resource: bindings, ignored listing per whitelist Jul 1 11:19:09.783: INFO: namespace e2e-tests-prestop-2dnqr deletion completed in 38.10777503s • [SLOW TEST:51.345 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:19:09.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 11:19:09.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-57klq" to be "success or failure" Jul 1 11:19:09.900: INFO: Pod "downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.882505ms Jul 1 11:19:11.905: INFO: Pod "downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018240422s Jul 1 11:19:13.910: INFO: Pod "downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02355527s STEP: Saw pod success Jul 1 11:19:13.910: INFO: Pod "downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:19:13.915: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006 container client-container: STEP: delete the pod Jul 1 11:19:14.024: INFO: Waiting for pod downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:19:14.027: INFO: Pod downwardapi-volume-0c17d9d0-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:19:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-57klq" for this suite. 
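What the downward-API check above relies on: when a container sets no CPU limit, limits.cpu exposed through a downwardAPI volume falls back to the node's allocatable CPU. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, deliberately
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downwardapi-cpu-demo   # prints the node allocatable CPU rather than a container limit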
Jul 1 11:19:20.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:19:20.123: INFO: namespace: e2e-tests-downward-api-57klq, resource: bindings, ignored listing per whitelist Jul 1 11:19:20.146: INFO: namespace e2e-tests-downward-api-57klq deletion completed in 6.115270839s • [SLOW TEST:10.363 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:19:20.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 1 11:19:20.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8sd4r' Jul 1 11:19:20.359: INFO: stderr: "" Jul 1 11:19:20.359: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jul 1 11:19:20.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8sd4r' Jul 1 11:19:35.746: INFO: stderr: "" Jul 1 11:19:35.746: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:19:35.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8sd4r" for this suite. 
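The command logged above is the whole trick: with --restart=Never and the run-pod/v1 generator used by this kubectl version, kubectl run creates a bare Pod instead of a Deployment. Reproduced outside the suite, in whatever namespace is current:

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.kind}'   # Pod
kubectl delete pod e2e-test-nginx-pod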
Jul 1 11:19:41.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:19:41.906: INFO: namespace: e2e-tests-kubectl-8sd4r, resource: bindings, ignored listing per whitelist Jul 1 11:19:41.932: INFO: namespace e2e-tests-kubectl-8sd4r deletion completed in 6.181907756s • [SLOW TEST:21.786 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:19:41.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1f42b92d-9bf2-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 11:19:42.184: INFO: Waiting up to 5m0s for pod "pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-qjck4" to be "success or failure" Jul 1 11:19:42.213: INFO: Pod "pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 28.07716ms Jul 1 11:19:44.227: INFO: Pod "pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042207231s Jul 1 11:19:46.233: INFO: Pod "pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048105387s STEP: Saw pod success Jul 1 11:19:46.233: INFO: Pod "pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:19:46.236: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006 container secret-volume-test: STEP: delete the pod Jul 1 11:19:46.297: INFO: Waiting for pod pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:19:46.304: INFO: Pod pod-secrets-1f5804be-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:19:46.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qjck4" for this suite. 
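The point of the test above is that secret volume sources are namespace-local: a pod only ever mounts the secret from its own namespace, even when another namespace holds a secret with the same name. A sketch with placeholder namespaces and names:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl create secret generic shared-name --from-literal=data-1=from-ns-a --namespace=ns-a
kubectl create secret generic shared-name --from-literal=data-1=from-ns-b --namespace=ns-b
cat <<'EOF' | kubectl create -f - --namespace=ns-a
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF
kubectl logs pod-secrets-demo --namespace=ns-a   # "from-ns-a", never ns-b's copy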
Jul 1 11:19:52.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:19:52.411: INFO: namespace: e2e-tests-secrets-qjck4, resource: bindings, ignored listing per whitelist Jul 1 11:19:52.460: INFO: namespace e2e-tests-secrets-qjck4 deletion completed in 6.152366309s STEP: Destroying namespace "e2e-tests-secret-namespace-v8djv" for this suite. Jul 1 11:19:58.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:19:58.555: INFO: namespace: e2e-tests-secret-namespace-v8djv, resource: bindings, ignored listing per whitelist Jul 1 11:19:58.559: INFO: namespace e2e-tests-secret-namespace-v8djv deletion completed in 6.098865556s • [SLOW TEST:16.627 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:19:58.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-zsss8/configmap-test-292a0ace-9bf2-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 11:19:58.666: INFO: Waiting up to 5m0s for pod "pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-zsss8" to be "success or failure" Jul 1 11:19:58.751: INFO: Pod "pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 84.93231ms Jul 1 11:20:00.756: INFO: Pod "pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089997276s Jul 1 11:20:02.760: INFO: Pod "pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.094069819s STEP: Saw pod success Jul 1 11:20:02.760: INFO: Pod "pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:20:02.763: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006 container env-test: STEP: delete the pod Jul 1 11:20:02.896: INFO: Waiting for pod pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:20:02.900: INFO: Pod pod-configmaps-292a9bb3-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:20:02.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zsss8" for this suite. Jul 1 11:20:08.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:20:09.056: INFO: namespace: e2e-tests-configmap-zsss8, resource: bindings, ignored listing per whitelist Jul 1 11:20:09.059: INFO: namespace e2e-tests-configmap-zsss8 deletion completed in 6.156005308s • [SLOW TEST:10.500 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:20:09.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 1 11:20:09.216: INFO: Waiting up to 5m0s for pod "pod-2f732a71-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-9wxvm" to be "success or failure" Jul 1 11:20:09.219: INFO: Pod "pod-2f732a71-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363288ms Jul 1 11:20:11.223: INFO: Pod "pod-2f732a71-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006167628s Jul 1 11:20:13.228: INFO: Pod "pod-2f732a71-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011787164s STEP: Saw pod success Jul 1 11:20:13.228: INFO: Pod "pod-2f732a71-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:20:13.232: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-2f732a71-9bf2-11e9-9f49-0242ac110006 container test-container: STEP: delete the pod Jul 1 11:20:13.290: INFO: Waiting for pod pod-2f732a71-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:20:13.347: INFO: Pod pod-2f732a71-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:20:13.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9wxvm" for this suite. Jul 1 11:20:19.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:20:19.576: INFO: namespace: e2e-tests-emptydir-9wxvm, resource: bindings, ignored listing per whitelist Jul 1 11:20:19.649: INFO: namespace e2e-tests-emptydir-9wxvm deletion completed in 6.294423211s • [SLOW TEST:10.589 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:20:19.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-26kc STEP: Creating a pod to test atomic-volume-subpath Jul 1 11:20:19.850: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-26kc" in namespace "e2e-tests-subpath-275r4" to be "success or failure" Jul 1 11:20:19.854: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.982909ms Jul 1 11:20:21.870: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020434599s Jul 1 11:20:23.875: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025177394s Jul 1 11:20:25.880: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 6.029928857s Jul 1 11:20:27.884: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 8.034635498s Jul 1 11:20:29.893: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 10.043173442s Jul 1 11:20:31.896: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.046647774s Jul 1 11:20:33.900: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 14.050672373s Jul 1 11:20:35.905: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 16.055858818s Jul 1 11:20:37.909: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 18.059883633s Jul 1 11:20:39.915: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 20.065448332s Jul 1 11:20:41.920: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 22.070864459s Jul 1 11:20:43.927: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Running", Reason="", readiness=false. Elapsed: 24.077158044s Jul 1 11:20:45.935: INFO: Pod "pod-subpath-test-projected-26kc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.0855314s STEP: Saw pod success Jul 1 11:20:45.935: INFO: Pod "pod-subpath-test-projected-26kc" satisfied condition "success or failure" Jul 1 11:20:45.940: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-projected-26kc container test-container-subpath-projected-26kc: STEP: delete the pod Jul 1 11:20:46.011: INFO: Waiting for pod pod-subpath-test-projected-26kc to disappear Jul 1 11:20:46.017: INFO: Pod pod-subpath-test-projected-26kc no longer exists STEP: Deleting pod pod-subpath-test-projected-26kc Jul 1 11:20:46.018: INFO: Deleting pod "pod-subpath-test-projected-26kc" in namespace "e2e-tests-subpath-275r4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:20:46.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-275r4" for this suite. 
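A reduced version of the atomic-writer subPath scenario exercised above: one key of a projected ConfigMap volume is mounted as a single file via subPath. The names and the key below are illustrative, not the suite's generated fixtures.

kubectl create configmap subpath-demo --from-literal=configmap-key=configmap-contents
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/etc/demo/config"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/demo/config
      subPath: configmap-key       # mount just this key as one file
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo
EOF
kubectl logs pod-subpath-projected-demo   # "configmap-contents"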
Jul 1 11:20:52.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:20:52.254: INFO: namespace: e2e-tests-subpath-275r4, resource: bindings, ignored listing per whitelist Jul 1 11:20:52.260: INFO: namespace e2e-tests-subpath-275r4 deletion completed in 6.232277218s • [SLOW TEST:32.611 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:20:52.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:20:59.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-5xkrf" for this suite. 
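The adoption flow spelled out in the STEPs above, reproduced with kubectl; the RC name and image are placeholders, while the name=pod-adoption label matches the one the test mentions.

# 1. A bare pod carrying the label
kubectl run pod-adoption --restart=Never --image=k8s.gcr.io/pause:3.1 --labels=name=pod-adoption
# 2. An RC whose selector matches that label; it adopts the orphan instead of creating a new pod
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption-rc
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# 3. After the controller syncs, the bare pod carries an ownerReference to the RC
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicationController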
Jul 1 11:21:21.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:21:21.546: INFO: namespace: e2e-tests-replication-controller-5xkrf, resource: bindings, ignored listing per whitelist Jul 1 11:21:21.663: INFO: namespace e2e-tests-replication-controller-5xkrf deletion completed in 22.197953714s • [SLOW TEST:29.403 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:21:21.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-5ac50fc9-9bf2-11e9-9f49-0242ac110006 STEP: Creating configMap with name cm-test-opt-upd-5ac5103b-9bf2-11e9-9f49-0242ac110006 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5ac50fc9-9bf2-11e9-9f49-0242ac110006 STEP: Updating configmap cm-test-opt-upd-5ac5103b-9bf2-11e9-9f49-0242ac110006 STEP: Creating configMap with name cm-test-opt-create-5ac5106c-9bf2-11e9-9f49-0242ac110006 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:21:30.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bqznh" for this suite. 
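The optional knob this test exercises: a projected ConfigMap source marked optional lets the pod start before the ConfigMap exists, and the kubelet folds the data into the volume once it appears. A sketch with placeholder names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/cm; sleep 5; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: cm-opt
          optional: true        # a missing ConfigMap is tolerated; the volume starts out empty
EOF
kubectl create configmap cm-opt --from-literal=key=value
# Within the kubelet sync period the key shows up under /etc/cm; deleting the ConfigMap removes it again.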
Jul 1 11:21:52.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:21:52.304: INFO: namespace: e2e-tests-projected-bqznh, resource: bindings, ignored listing per whitelist Jul 1 11:21:52.318: INFO: namespace e2e-tests-projected-bqznh deletion completed in 22.158900025s • [SLOW TEST:30.655 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:21:52.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0701 11:22:32.466616 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 11:22:32.466: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:22:32.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-lw45v" for this suite. 
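The orphaning delete that the garbage-collector test performs through DeleteOptions has a kubectl equivalent in this release, --cascade=false. The RC name and label selector below are placeholders for any existing controller:

kubectl delete rc demo-rc --cascade=false   # orphan the pods instead of cascading the delete
kubectl get pods -l app=demo                # replicas keep running; their ownerReference to demo-rc is cleared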
Jul 1 11:22:42.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:22:42.558: INFO: namespace: e2e-tests-gc-lw45v, resource: bindings, ignored listing per whitelist Jul 1 11:22:42.662: INFO: namespace e2e-tests-gc-lw45v deletion completed in 10.19099794s • [SLOW TEST:50.344 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:22:42.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 11:22:43.439: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8b42a061-9bf2-11e9-a678-fa163e0cec1d", Controller:(*bool)(0xc001a2e4ca), BlockOwnerDeletion:(*bool)(0xc001a2e4cb)}} Jul 1 11:22:43.487: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8b3e17de-9bf2-11e9-a678-fa163e0cec1d", Controller:(*bool)(0xc0018d38ea), BlockOwnerDeletion:(*bool)(0xc0018d38eb)}} Jul 1 11:22:43.504: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8b3f2b3a-9bf2-11e9-a678-fa163e0cec1d", Controller:(*bool)(0xc001bd89f2), BlockOwnerDeletion:(*bool)(0xc001bd89f3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:22:48.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mhq2p" for this suite. 
Jul 1 11:22:54.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:22:54.741: INFO: namespace: e2e-tests-gc-mhq2p, resource: bindings, ignored listing per whitelist Jul 1 11:22:54.789: INFO: namespace e2e-tests-gc-mhq2p deletion completed in 6.245267963s • [SLOW TEST:12.126 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:22:54.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 1 11:22:54.993: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-fp2pv,SelfLink:/api/v1/namespaces/e2e-tests-watch-fp2pv/configmaps/e2e-watch-test-resource-version,UID:9234459c-9bf2-11e9-a678-fa163e0cec1d,ResourceVersion:1845650,Generation:0,CreationTimestamp:2019-07-01 11:22:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 11:22:54.993: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-fp2pv,SelfLink:/api/v1/namespaces/e2e-tests-watch-fp2pv/configmaps/e2e-watch-test-resource-version,UID:9234459c-9bf2-11e9-a678-fa163e0cec1d,ResourceVersion:1845651,Generation:0,CreationTimestamp:2019-07-01 11:22:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:22:54.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-fp2pv" for this suite. 
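Replaying a watch from a saved resourceVersion, as this test does through the client library, can also be tried against the raw API. The ConfigMap name and the default namespace below are placeholders, and kubectl proxy is only used to get an unauthenticated local endpoint; whether old events are still replayable depends on the etcd compaction window.

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo
kubectl proxy --port=8001 &
# Starting the watch at $RV streams only the events recorded after that point (modifications, then the delete).
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name%3De2e-watch-demo"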
Jul 1 11:23:01.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:23:01.063: INFO: namespace: e2e-tests-watch-fp2pv, resource: bindings, ignored listing per whitelist Jul 1 11:23:01.149: INFO: namespace e2e-tests-watch-fp2pv deletion completed in 6.136554724s • [SLOW TEST:6.360 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:23:01.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 1 11:23:11.325: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:11.325: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:11.559: INFO: Exec stderr: "" Jul 1 11:23:11.559: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:11.559: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:11.742: INFO: Exec stderr: "" Jul 1 11:23:11.742: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:11.742: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:11.919: INFO: Exec stderr: "" Jul 1 11:23:11.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:11.919: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.128: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 1 11:23:12.128: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:12.128: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.290: INFO: Exec stderr: "" Jul 1 11:23:12.290: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:12.290: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.419: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 1 11:23:12.419: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:12.419: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.558: INFO: Exec stderr: "" Jul 1 11:23:12.558: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:12.558: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.697: INFO: Exec stderr: "" Jul 1 11:23:12.697: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:12.697: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.836: INFO: Exec stderr: "" Jul 1 11:23:12.836: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-vqwzs PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:23:12.836: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:23:12.995: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:23:12.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-vqwzs" for this suite. 
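The distinction this test draws can be checked by hand: for a pod without hostNetwork and without its own /etc/hosts mount, the kubelet generates the hosts file. Names below are illustrative, and the exact header comment is an assumption that may vary between kubelet versions.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  containers:
  - name: busybox-1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl exec etc-hosts-demo -- cat /etc/hosts
# Expect a kubelet-generated file (typically starting with a "Kubernetes-managed hosts file" comment).
# With hostNetwork: true, or with /etc/hosts mounted from a volume, the kubelet leaves the file alone.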
Jul 1 11:23:53.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:23:53.107: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-vqwzs, resource: bindings, ignored listing per whitelist Jul 1 11:23:53.133: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-vqwzs deletion completed in 40.133726993s • [SLOW TEST:51.984 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:23:53.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-w5swg STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 11:23:53.205: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 1 11:24:13.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-w5swg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 11:24:13.303: INFO: >>> kubeConfig: /root/.kube/config Jul 1 11:24:13.488: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:24:13.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-w5swg" for this suite. 
Jul 1 11:24:37.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:24:37.528: INFO: namespace: e2e-tests-pod-network-test-w5swg, resource: bindings, ignored listing per whitelist Jul 1 11:24:37.595: INFO: namespace e2e-tests-pod-network-test-w5swg deletion completed in 24.102877254s • [SLOW TEST:44.461 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:24:37.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jul 1 11:24:37.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:37.970: INFO: stderr: "" Jul 1 11:24:37.970: INFO: stdout: "pod/pause created\n" Jul 1 11:24:37.970: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 1 11:24:37.970: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-7lzmt" to be "running and ready" Jul 1 11:24:37.977: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.315119ms Jul 1 11:24:39.981: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011264964s Jul 1 11:24:41.985: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01534978s Jul 1 11:24:41.985: INFO: Pod "pause" satisfied condition "running and ready" Jul 1 11:24:41.985: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jul 1 11:24:41.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:42.099: INFO: stderr: "" Jul 1 11:24:42.099: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 1 11:24:42.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:42.182: INFO: stderr: "" Jul 1 11:24:42.182: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 1 11:24:42.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:42.270: INFO: stderr: "" Jul 1 11:24:42.270: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 1 11:24:42.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:42.347: INFO: stderr: "" Jul 1 11:24:42.347: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jul 1 11:24:42.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:42.498: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:24:42.498: INFO: stdout: "pod \"pause\" force deleted\n" Jul 1 11:24:42.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-7lzmt' Jul 1 11:24:42.639: INFO: stderr: "No resources found.\n" Jul 1 11:24:42.639: INFO: stdout: "" Jul 1 11:24:42.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-7lzmt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 11:24:42.848: INFO: stderr: "" Jul 1 11:24:42.848: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:24:42.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7lzmt" for this suite. 
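The pause pod that these label operations act on is piped in via `create -f -` and never echoed into the log; a plausible minimal manifest for it looks roughly like this, with the image an assumption and the name=pause label chosen to match the `-l name=pause` cleanup selectors above.

apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause                   # matches the -l name=pause selectors used during cleanup
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed; any long-running image would do

With such a pod in place, `kubectl label pods pause testing-label=testing-label-value` adds the label, `-L testing-label` surfaces it as an extra column in `kubectl get`, and the trailing-dash form `testing-label-` removes it again, exactly as the transcript shows.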
Jul 1 11:24:48.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:24:48.923: INFO: namespace: e2e-tests-kubectl-7lzmt, resource: bindings, ignored listing per whitelist Jul 1 11:24:48.953: INFO: namespace e2e-tests-kubectl-7lzmt deletion completed in 6.101004836s • [SLOW TEST:11.359 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:24:48.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 11:24:53.653: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d6415090-9bf2-11e9-9f49-0242ac110006" Jul 1 11:24:53.653: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d6415090-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-pods-sh4kc" to be "terminated due to deadline exceeded" Jul 1 11:24:53.664: INFO: Pod "pod-update-activedeadlineseconds-d6415090-9bf2-11e9-9f49-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 11.178842ms Jul 1 11:24:55.676: INFO: Pod "pod-update-activedeadlineseconds-d6415090-9bf2-11e9-9f49-0242ac110006": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02271656s Jul 1 11:24:55.676: INFO: Pod "pod-update-activedeadlineseconds-d6415090-9bf2-11e9-9f49-0242ac110006" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:24:55.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-sh4kc" for this suite. 
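activeDeadlineSeconds is one of the few pod spec fields that may be changed after creation, and only downwards; a sketch of the kind of pod and patch involved, with names and values invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 600      # generous initial deadline
  containers:
  - name: main
    image: busybox                # assumed image
    command: ["sleep", "3600"]

Patching the running pod with `kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'` tightens the deadline; once it expires the pod is killed and ends up in Phase=Failed with Reason=DeadlineExceeded, the terminal state the test waits for above.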
Jul 1 11:25:01.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:25:01.746: INFO: namespace: e2e-tests-pods-sh4kc, resource: bindings, ignored listing per whitelist Jul 1 11:25:01.871: INFO: namespace e2e-tests-pods-sh4kc deletion completed in 6.189575949s • [SLOW TEST:12.918 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:25:01.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-de049761-9bf2-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 11:25:02.093: INFO: Waiting up to 5m0s for pod "pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-5t6tg" to be "success or failure" Jul 1 11:25:02.116: INFO: Pod "pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 22.816872ms Jul 1 11:25:04.120: INFO: Pod "pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026955363s Jul 1 11:25:06.126: INFO: Pod "pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033425471s STEP: Saw pod success Jul 1 11:25:06.126: INFO: Pod "pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:25:06.130: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006 container configmap-volume-test: STEP: delete the pod Jul 1 11:25:06.161: INFO: Waiting for pod pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:25:06.164: INFO: Pod pod-configmaps-de05cebf-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:25:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5t6tg" for this suite. 
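The "with mappings" variant consumes a ConfigMap through a volume whose items list remaps a key to a chosen file path; a minimal sketch, with all names invented for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                                    # assumed image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data-1                          # key remapped to a nested relative path

The pod prints value-1 from the remapped path and exits, which is the Phase=Succeeded outcome the "success or failure" wait above is checking for.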
Jul 1 11:25:12.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:25:12.300: INFO: namespace: e2e-tests-configmap-5t6tg, resource: bindings, ignored listing per whitelist Jul 1 11:25:12.335: INFO: namespace e2e-tests-configmap-5t6tg deletion completed in 6.168030443s • [SLOW TEST:10.463 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:25:12.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 1 11:25:12.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-snxv4" to be "success or failure" Jul 1 11:25:12.479: INFO: Pod "downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.991098ms Jul 1 11:25:14.487: INFO: Pod "downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0159667s Jul 1 11:25:16.495: INFO: Pod "downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023458226s STEP: Saw pod success Jul 1 11:25:16.495: INFO: Pod "downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:25:16.503: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006 container client-container: STEP: delete the pod Jul 1 11:25:16.553: INFO: Waiting for pod downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:25:16.557: INFO: Pod downwardapi-volume-e4351fca-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:25:16.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-snxv4" for this suite. 
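Exposing a container's own CPU request through a downwardAPI volume uses a resourceFieldRef with a divisor; a minimal sketch, with names and quantities chosen only for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # assumed image
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                   # with a 250m request the file contains "250"

Reading that file back through the container's logs is what the "Trying to get logs ... container client-container" step above corresponds to.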
Jul 1 11:25:22.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:25:22.705: INFO: namespace: e2e-tests-downward-api-snxv4, resource: bindings, ignored listing per whitelist Jul 1 11:25:22.722: INFO: namespace e2e-tests-downward-api-snxv4 deletion completed in 6.160646009s • [SLOW TEST:10.387 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:25:22.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-ea6ead8c-9bf2-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume configMaps Jul 1 11:25:22.914: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-9nwvl" to be "success or failure" Jul 1 11:25:23.053: INFO: Pod "pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 138.670506ms Jul 1 11:25:25.056: INFO: Pod "pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142260405s Jul 1 11:25:27.064: INFO: Pod "pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150311814s STEP: Saw pod success Jul 1 11:25:27.064: INFO: Pod "pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:25:27.069: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006 container projected-configmap-volume-test: STEP: delete the pod Jul 1 11:25:27.153: INFO: Waiting for pod pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006 to disappear Jul 1 11:25:27.168: INFO: Pod pod-projected-configmaps-ea6f1577-9bf2-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:25:27.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9nwvl" for this suite. 
Jul 1 11:25:33.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:25:33.238: INFO: namespace: e2e-tests-projected-9nwvl, resource: bindings, ignored listing per whitelist Jul 1 11:25:33.309: INFO: namespace e2e-tests-projected-9nwvl deletion completed in 6.134659942s • [SLOW TEST:10.586 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:25:33.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-d4lsr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d4lsr to expose endpoints map[] Jul 1 11:25:33.496: INFO: Get endpoints failed (13.446307ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 1 11:25:34.501: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d4lsr exposes endpoints map[] (1.018405363s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-d4lsr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d4lsr to expose endpoints map[pod1:[80]] Jul 1 11:25:37.555: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d4lsr exposes endpoints map[pod1:[80]] (3.044093304s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-d4lsr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d4lsr to expose endpoints map[pod1:[80] pod2:[80]] Jul 1 11:25:40.635: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d4lsr exposes endpoints map[pod1:[80] pod2:[80]] (3.073906298s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-d4lsr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d4lsr to expose endpoints map[pod2:[80]] Jul 1 11:25:41.682: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d4lsr exposes endpoints map[pod2:[80]] (1.039201397s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-d4lsr STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-d4lsr to expose endpoints map[] Jul 1 11:25:42.753: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-d4lsr exposes endpoints map[] (1.065181262s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:25:42.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-d4lsr" for this suite. Jul 1 11:26:04.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:26:04.987: INFO: namespace: e2e-tests-services-d4lsr, resource: bindings, ignored listing per whitelist Jul 1 11:26:05.028: INFO: namespace e2e-tests-services-d4lsr deletion completed in 22.201084398s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.719 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:26:05.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-03a1e83c-9bf3-11e9-9f49-0242ac110006 STEP: Creating a pod to test consume secrets Jul 1 11:26:05.198: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-2hpd8" to be "success or failure" Jul 1 11:26:05.204: INFO: Pod "pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.794519ms Jul 1 11:26:07.209: INFO: Pod "pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010598891s Jul 1 11:26:09.214: INFO: Pod "pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01566727s STEP: Saw pod success Jul 1 11:26:09.214: INFO: Pod "pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:26:09.218: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006 container projected-secret-volume-test: STEP: delete the pod Jul 1 11:26:09.309: INFO: Waiting for pod pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006 to disappear Jul 1 11:26:09.321: INFO: Pod pod-projected-secrets-03a2ec15-9bf3-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:26:09.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2hpd8" for this suite. 
Jul 1 11:26:15.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:26:15.455: INFO: namespace: e2e-tests-projected-2hpd8, resource: bindings, ignored listing per whitelist Jul 1 11:26:15.532: INFO: namespace e2e-tests-projected-2hpd8 deletion completed in 6.205121677s • [SLOW TEST:10.503 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:26:15.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-09dbd228-9bf3-11e9-9f49-0242ac110006 STEP: Creating secret with name secret-projected-all-test-volume-09dbd1f1-9bf3-11e9-9f49-0242ac110006 STEP: Creating a pod to test Check all projections for projected volume plugin Jul 1 11:26:15.682: INFO: Waiting up to 5m0s for pod "projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-xznxt" to be "success or failure" Jul 1 11:26:15.693: INFO: Pod "projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.202243ms Jul 1 11:26:17.712: INFO: Pod "projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030327085s Jul 1 11:26:19.720: INFO: Pod "projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038719296s STEP: Saw pod success Jul 1 11:26:19.721: INFO: Pod "projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006" satisfied condition "success or failure" Jul 1 11:26:19.725: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006 container projected-all-volume-test: STEP: delete the pod Jul 1 11:26:19.836: INFO: Waiting for pod projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006 to disappear Jul 1 11:26:19.840: INFO: Pod projected-volume-09dbd177-9bf3-11e9-9f49-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:26:19.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xznxt" for this suite. 
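A projected volume merges several sources under one mount point; the "all components" test above combines a ConfigMap, a Secret, and the downward API, roughly as in this sketch, where every name is an assumption and the referenced ConfigMap and Secret must be created first (as the two "Creating ..." STEPs above do):

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "cat /projected-volume/podname /projected-volume/cm-data /projected-volume/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: demo-configmap
          items:
          - key: data
            path: cm-data
      - secret:
          name: demo-secret
          items:
          - key: data
            path: secret-data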
Jul 1 11:26:25.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:26:25.943: INFO: namespace: e2e-tests-projected-xznxt, resource: bindings, ignored listing per whitelist Jul 1 11:26:25.972: INFO: namespace e2e-tests-projected-xznxt deletion completed in 6.128445555s • [SLOW TEST:10.440 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:26:25.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-qbz9 STEP: Creating a pod to test atomic-volume-subpath Jul 1 11:26:26.153: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qbz9" in namespace "e2e-tests-subpath-gfkvn" to be "success or failure" Jul 1 11:26:26.186: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Pending", Reason="", readiness=false. Elapsed: 33.419984ms Jul 1 11:26:28.190: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037022226s Jul 1 11:26:30.194: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041684098s Jul 1 11:26:32.198: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 6.04569541s Jul 1 11:26:34.203: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 8.050606803s Jul 1 11:26:36.208: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 10.055672608s Jul 1 11:26:38.213: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 12.060666329s Jul 1 11:26:40.218: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 14.065314283s Jul 1 11:26:42.223: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 16.070164385s Jul 1 11:26:44.227: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 18.074372796s Jul 1 11:26:46.233: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. Elapsed: 20.080227923s Jul 1 11:26:48.236: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.083609642s Jul 1 11:26:50.285: INFO: Pod "pod-subpath-test-configmap-qbz9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.132067765s STEP: Saw pod success Jul 1 11:26:50.285: INFO: Pod "pod-subpath-test-configmap-qbz9" satisfied condition "success or failure" Jul 1 11:26:50.290: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-configmap-qbz9 container test-container-subpath-configmap-qbz9: STEP: delete the pod Jul 1 11:26:50.340: INFO: Waiting for pod pod-subpath-test-configmap-qbz9 to disappear Jul 1 11:26:50.344: INFO: Pod pod-subpath-test-configmap-qbz9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qbz9 Jul 1 11:26:50.345: INFO: Deleting pod "pod-subpath-test-configmap-qbz9" in namespace "e2e-tests-subpath-gfkvn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:26:50.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-gfkvn" for this suite. Jul 1 11:26:56.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:26:56.501: INFO: namespace: e2e-tests-subpath-gfkvn, resource: bindings, ignored listing per whitelist Jul 1 11:26:56.587: INFO: namespace e2e-tests-subpath-gfkvn deletion completed in 6.234075783s • [SLOW TEST:30.615 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:26:56.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 1 11:27:01.289: INFO: Successfully updated pod "annotationupdate2258182e-9bf3-11e9-9f49-0242ac110006" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:27:03.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j5b27" for this suite. 
Jul 1 11:27:25.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:27:25.509: INFO: namespace: e2e-tests-projected-j5b27, resource: bindings, ignored listing per whitelist Jul 1 11:27:25.520: INFO: namespace e2e-tests-projected-j5b27 deletion completed in 22.181140976s • [SLOW TEST:28.933 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:27:25.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 1 11:27:30.220: INFO: Successfully updated pod "labelsupdate33952232-9bf3-11e9-9f49-0242ac110006" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:27:32.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6czhh" for this suite. 
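Both projected downwardAPI update tests above rely on the kubelet rewriting a mounted metadata file after the pod's annotations or labels change; a sketch of the labels variant, with all names invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

After `kubectl label pod labels-update-demo key2=value2`, the mounted labels file is refreshed on the kubelet's next sync rather than instantly, so a short poll is needed before the new value appears; that is why the "Successfully updated pod" entries above are followed by a brief wait instead of an immediate assertion.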
Jul 1 11:27:56.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:27:56.473: INFO: namespace: e2e-tests-projected-6czhh, resource: bindings, ignored listing per whitelist Jul 1 11:27:56.522: INFO: namespace e2e-tests-projected-6czhh deletion completed in 24.21870084s • [SLOW TEST:31.002 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:27:56.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jul 1 11:27:56.659: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jul 1 11:27:56.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:27:58.406: INFO: stderr: "" Jul 1 11:27:58.406: INFO: stdout: "service/redis-slave created\n" Jul 1 11:27:58.406: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jul 1 11:27:58.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:27:58.764: INFO: stderr: "" Jul 1 11:27:58.764: INFO: stdout: "service/redis-master created\n" Jul 1 11:27:58.764: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 1 11:27:58.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:27:59.048: INFO: stderr: "" Jul 1 11:27:59.048: INFO: stdout: "service/frontend created\n" Jul 1 11:27:59.049: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jul 1 11:27:59.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:27:59.289: INFO: stderr: "" Jul 1 11:27:59.289: INFO: stdout: "deployment.extensions/frontend created\n" Jul 1 11:27:59.290: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 1 11:27:59.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:27:59.559: INFO: stderr: "" Jul 1 11:27:59.559: INFO: stdout: "deployment.extensions/redis-master created\n" Jul 1 11:27:59.559: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jul 1 11:27:59.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:27:59.884: INFO: stderr: "" Jul 1 11:27:59.884: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jul 1 11:27:59.884: INFO: Waiting for all frontend pods to be Running. Jul 1 11:28:09.934: INFO: Waiting for frontend to serve content. Jul 1 11:28:10.532: INFO: Trying to add a new entry to the guestbook. Jul 1 11:28:10.604: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 1 11:28:10.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:28:10.921: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:28:10.921: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jul 1 11:28:10.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:28:11.082: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:28:11.082: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 11:28:11.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:28:11.223: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:28:11.223: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 11:28:11.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:28:11.370: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:28:11.370: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 11:28:11.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:28:11.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:28:11.577: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 11:28:11.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-czbn4' Jul 1 11:28:11.770: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 11:28:11.770: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:28:11.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-czbn4" for this suite. 
Jul 1 11:28:52.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:28:52.194: INFO: namespace: e2e-tests-kubectl-czbn4, resource: bindings, ignored listing per whitelist Jul 1 11:28:52.304: INFO: namespace e2e-tests-kubectl-czbn4 deletion completed in 40.514333404s • [SLOW TEST:55.781 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:28:52.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 1 11:28:52.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tnrdn,SelfLink:/api/v1/namespaces/e2e-tests-watch-tnrdn/configmaps/e2e-watch-test-label-changed,UID:67555608-9bf3-11e9-a678-fa163e0cec1d,ResourceVersion:1846728,Generation:0,CreationTimestamp:2019-07-01 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 1 11:28:52.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tnrdn,SelfLink:/api/v1/namespaces/e2e-tests-watch-tnrdn/configmaps/e2e-watch-test-label-changed,UID:67555608-9bf3-11e9-a678-fa163e0cec1d,ResourceVersion:1846729,Generation:0,CreationTimestamp:2019-07-01 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 1 11:28:52.483: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tnrdn,SelfLink:/api/v1/namespaces/e2e-tests-watch-tnrdn/configmaps/e2e-watch-test-label-changed,UID:67555608-9bf3-11e9-a678-fa163e0cec1d,ResourceVersion:1846730,Generation:0,CreationTimestamp:2019-07-01 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 1 11:29:02.538: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tnrdn,SelfLink:/api/v1/namespaces/e2e-tests-watch-tnrdn/configmaps/e2e-watch-test-label-changed,UID:67555608-9bf3-11e9-a678-fa163e0cec1d,ResourceVersion:1846744,Generation:0,CreationTimestamp:2019-07-01 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 1 11:29:02.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tnrdn,SelfLink:/api/v1/namespaces/e2e-tests-watch-tnrdn/configmaps/e2e-watch-test-label-changed,UID:67555608-9bf3-11e9-a678-fa163e0cec1d,ResourceVersion:1846745,Generation:0,CreationTimestamp:2019-07-01 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jul 1 11:29:02.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tnrdn,SelfLink:/api/v1/namespaces/e2e-tests-watch-tnrdn/configmaps/e2e-watch-test-label-changed,UID:67555608-9bf3-11e9-a678-fa163e0cec1d,ResourceVersion:1846746,Generation:0,CreationTimestamp:2019-07-01 11:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:29:02.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-tnrdn" for this suite. 
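The watched object in this test is an ordinary ConfigMap selected by label, and the DELETED/ADDED pairs above are purely a product of the label moving out of and back into the watch's selector. Reconstructed from the fields in the events (the apiVersion/kind are implied), the object is simply:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"

`kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch` shows the same behaviour: relabelling the object off the selector surfaces as a DELETED event to that watcher even though the ConfigMap still exists, and restoring the label surfaces as ADDED.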
Jul 1 11:29:08.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 1 11:29:08.673: INFO: namespace: e2e-tests-watch-tnrdn, resource: bindings, ignored listing per whitelist Jul 1 11:29:08.743: INFO: namespace e2e-tests-watch-tnrdn deletion completed in 6.198968334s • [SLOW TEST:16.439 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 1 11:29:08.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 1 11:29:08.828: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 1 11:29:09.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-wkh65" for this suite. 
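The CustomResourceDefinition test only round-trips a definition through the API server; on a v1.13 cluster that means the apiextensions.k8s.io/v1beta1 API, roughly as in this sketch, where the group, kind, and plural are invented for illustration:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
    listKind: FooList

Creating it with `kubectl create -f -`, confirming it with `kubectl get crd foos.example.com`, and deleting it again covers the same create/delete lifecycle the test drives through the API.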
Jul  1 11:29:15.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:29:16.040: INFO: namespace: e2e-tests-custom-resource-definition-wkh65, resource: bindings, ignored listing per whitelist
Jul  1 11:29:16.065: INFO: namespace e2e-tests-custom-resource-definition-wkh65 deletion completed in 6.14136781s

• [SLOW TEST:7.322 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:29:16.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 11:29:16.241: INFO: (0) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 7.741172ms)
Jul  1 11:29:16.249: INFO: (1) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.180818ms)
Jul  1 11:29:16.254: INFO: (2) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.356885ms)
Jul  1 11:29:16.259: INFO: (3) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.957468ms)
Jul  1 11:29:16.267: INFO: (4) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.466575ms)
Jul  1 11:29:16.271: INFO: (5) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.621175ms)
Jul  1 11:29:16.275: INFO: (6) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.058453ms)
Jul  1 11:29:16.281: INFO: (7) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.855591ms)
Jul  1 11:29:16.286: INFO: (8) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.459977ms)
Jul  1 11:29:16.289: INFO: (9) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.555322ms)
Jul  1 11:29:16.294: INFO: (10) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.074578ms)
Jul  1 11:29:16.299: INFO: (11) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.512464ms)
Jul  1 11:29:16.302: INFO: (12) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.653011ms)
Jul  1 11:29:16.305: INFO: (13) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.027272ms)
Jul  1 11:29:16.309: INFO: (14) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.250814ms)
Jul  1 11:29:16.368: INFO: (15) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 59.599682ms)
Jul  1 11:29:16.373: INFO: (16) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.809296ms)
Jul  1 11:29:16.377: INFO: (17) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.16556ms)
Jul  1 11:29:16.381: INFO: (18) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.540778ms)
Jul  1 11:29:16.386: INFO: (19) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.741117ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:29:16.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-pswb4" for this suite.
Jul  1 11:29:22.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:29:22.522: INFO: namespace: e2e-tests-proxy-pswb4, resource: bindings, ignored listing per whitelist
Jul  1 11:29:22.560: INFO: namespace e2e-tests-proxy-pswb4 deletion completed in 6.169753177s

• [SLOW TEST:6.495 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
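For reference only: the kubelet logs endpoint the test above polls can also be fetched by hand through the apiserver's node proxy subresource. The node name and port are the ones in the log; the invocation itself is illustrative and not part of the run.

# List the kubelet's log directory via the apiserver proxy subresource.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/"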
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:29:22.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-tfn5
STEP: Creating a pod to test atomic-volume-subpath
Jul  1 11:29:22.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tfn5" in namespace "e2e-tests-subpath-5g2j5" to be "success or failure"
Jul  1 11:29:22.714: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253653ms
Jul  1 11:29:24.773: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063143299s
Jul  1 11:29:26.776: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067046595s
Jul  1 11:29:28.780: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 6.070485339s
Jul  1 11:29:30.798: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 8.088663631s
Jul  1 11:29:32.804: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 10.09445912s
Jul  1 11:29:34.809: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 12.099534447s
Jul  1 11:29:36.814: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 14.104718628s
Jul  1 11:29:38.820: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 16.110871958s
Jul  1 11:29:40.824: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 18.114614696s
Jul  1 11:29:42.829: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 20.119653925s
Jul  1 11:29:44.833: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 22.123535822s
Jul  1 11:29:46.840: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Running", Reason="", readiness=false. Elapsed: 24.130721573s
Jul  1 11:29:48.844: INFO: Pod "pod-subpath-test-secret-tfn5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.134474461s
STEP: Saw pod success
Jul  1 11:29:48.844: INFO: Pod "pod-subpath-test-secret-tfn5" satisfied condition "success or failure"
Jul  1 11:29:48.846: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-secret-tfn5 container test-container-subpath-secret-tfn5: 
STEP: delete the pod
Jul  1 11:29:48.893: INFO: Waiting for pod pod-subpath-test-secret-tfn5 to disappear
Jul  1 11:29:48.901: INFO: Pod pod-subpath-test-secret-tfn5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-tfn5
Jul  1 11:29:48.901: INFO: Deleting pod "pod-subpath-test-secret-tfn5" in namespace "e2e-tests-subpath-5g2j5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:29:48.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-5g2j5" for this suite.
Jul  1 11:29:54.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:29:55.042: INFO: namespace: e2e-tests-subpath-5g2j5, resource: bindings, ignored listing per whitelist
Jul  1 11:29:55.075: INFO: namespace e2e-tests-subpath-5g2j5 deletion completed in 6.169208858s

• [SLOW TEST:32.515 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
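For reference only: a minimal pod in the same shape as the secret-plus-subPath pod the test above creates. The secret name, key, image, and paths are illustrative assumptions, not values from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: subpath-demo-secret        # illustrative name
stringData:
  key.txt: "hello from a secret"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox                 # illustrative image
    command: ["cat", "/mnt/key.txt"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/key.txt
      subPath: key.txt             # mount a single key of the secret via subPath
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo-secret
EOF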
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:29:55.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:29:55.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-7nrff" to be "success or failure"
Jul  1 11:29:55.187: INFO: Pod "downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346369ms
Jul  1 11:29:57.192: INFO: Pod "downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009292609s
Jul  1 11:29:59.195: INFO: Pod "downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012177968s
STEP: Saw pod success
Jul  1 11:29:59.195: INFO: Pod "downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:29:59.197: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:29:59.221: INFO: Waiting for pod downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006 to disappear
Jul  1 11:29:59.283: INFO: Pod downwardapi-volume-8cb7ccd3-9bf3-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:29:59.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7nrff" for this suite.
Jul  1 11:30:05.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:30:05.445: INFO: namespace: e2e-tests-downward-api-7nrff, resource: bindings, ignored listing per whitelist
Jul  1 11:30:05.452: INFO: namespace e2e-tests-downward-api-7nrff deletion completed in 6.166047688s

• [SLOW TEST:10.377 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
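For reference only: a pod along these lines exposes its own memory limit through a downward API volume, which is what the test above exercises. The pod name, image, limit value, and mount path are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # illustrative image
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # written as a plain byte count by default
EOF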
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:30:05.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul  1 11:30:05.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul  1 11:30:05.712: INFO: stderr: ""
Jul  1 11:30:05.712: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:30:05.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xltfk" for this suite.
Jul  1 11:30:11.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:30:11.889: INFO: namespace: e2e-tests-kubectl-xltfk, resource: bindings, ignored listing per whitelist
Jul  1 11:30:11.936: INFO: namespace e2e-tests-kubectl-xltfk deletion completed in 6.220367433s

• [SLOW TEST:6.484 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
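For reference only: the check the test above performs amounts to listing the served API versions and confirming that the core "v1" group/version is among them.

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1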
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:30:11.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:30:12.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-sfbh6" to be "success or failure"
Jul  1 11:30:12.048: INFO: Pod "downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.255564ms
Jul  1 11:30:14.053: INFO: Pod "downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018054752s
Jul  1 11:30:16.057: INFO: Pod "downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022125294s
STEP: Saw pod success
Jul  1 11:30:16.057: INFO: Pod "downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:30:16.065: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:30:16.102: INFO: Waiting for pod downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006 to disappear
Jul  1 11:30:16.143: INFO: Pod downwardapi-volume-96c0b071-9bf3-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:30:16.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sfbh6" for this suite.
Jul  1 11:30:22.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:30:22.261: INFO: namespace: e2e-tests-downward-api-sfbh6, resource: bindings, ignored listing per whitelist
Jul  1 11:30:22.302: INFO: namespace e2e-tests-downward-api-sfbh6 deletion completed in 6.138545615s

• [SLOW TEST:10.366 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
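For reference only: a downward API volume with an explicit defaultMode, in the spirit of the DefaultMode test above. The pod name, image, mode value, and file path are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400            # mode applied to the projected files
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF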
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:30:22.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:30:22.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-xksmr" to be "success or failure"
Jul  1 11:30:22.440: INFO: Pod "downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 25.381802ms
Jul  1 11:30:24.454: INFO: Pod "downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039092679s
Jul  1 11:30:26.460: INFO: Pod "downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044888782s
STEP: Saw pod success
Jul  1 11:30:26.460: INFO: Pod "downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:30:26.464: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:30:26.525: INFO: Waiting for pod downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006 to disappear
Jul  1 11:30:26.533: INFO: Pod downwardapi-volume-9cf255d5-9bf3-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:30:26.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xksmr" for this suite.
Jul  1 11:30:32.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:30:32.813: INFO: namespace: e2e-tests-projected-xksmr, resource: bindings, ignored listing per whitelist
Jul  1 11:30:32.830: INFO: namespace e2e-tests-projected-xksmr deletion completed in 6.289460876s

• [SLOW TEST:10.528 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
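For reference only: the projected-volume variant of the downward API, surfacing a container's CPU limit as the test above does. Names, image, limit, and divisor are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpulimit-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # illustrative image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m           # report the limit in millicores
EOF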
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:30:32.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-a346ef75-9bf3-11e9-9f49-0242ac110006
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a346ef75-9bf3-11e9-9f49-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:30:39.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fw4wq" for this suite.
Jul  1 11:31:03.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:31:03.205: INFO: namespace: e2e-tests-configmap-fw4wq, resource: bindings, ignored listing per whitelist
Jul  1 11:31:03.277: INFO: namespace e2e-tests-configmap-fw4wq deletion completed in 24.159246998s

• [SLOW TEST:30.447 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
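For reference only: one way to drive the update-propagation behaviour the test above checks, using kubectl directly. The configmap name and keys are illustrative, and the pod that mounts the configmap as a volume is assumed to already exist, as in the test; mounted files catch up on the kubelet's periodic sync, which is the delay visible in the timings above.

kubectl create configmap demo-config --from-literal=data-1=value-1       # illustrative name
kubectl create configmap demo-config --from-literal=data-1=value-2 \
  -o yaml --dry-run | kubectl replace -f -                               # --dry-run=client on newer kubectl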
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:31:03.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:31:03.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-vht58" to be "success or failure"
Jul  1 11:31:03.437: INFO: Pod "downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.802078ms
Jul  1 11:31:05.443: INFO: Pod "downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01776124s
Jul  1 11:31:07.448: INFO: Pod "downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022906705s
STEP: Saw pod success
Jul  1 11:31:07.448: INFO: Pod "downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:31:07.452: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:31:07.477: INFO: Waiting for pod downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006 to disappear
Jul  1 11:31:07.490: INFO: Pod downwardapi-volume-b5651df9-9bf3-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:31:07.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vht58" for this suite.
Jul  1 11:31:13.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:31:13.619: INFO: namespace: e2e-tests-downward-api-vht58, resource: bindings, ignored listing per whitelist
Jul  1 11:31:13.640: INFO: namespace e2e-tests-downward-api-vht58 deletion completed in 6.145940192s

• [SLOW TEST:10.363 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:31:13.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bb955242-9bf3-11e9-9f49-0242ac110006
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-bb955242-9bf3-11e9-9f49-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:32:30.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7kvt7" for this suite.
Jul  1 11:32:52.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:32:52.419: INFO: namespace: e2e-tests-projected-7kvt7, resource: bindings, ignored listing per whitelist
Jul  1 11:32:52.493: INFO: namespace e2e-tests-projected-7kvt7 deletion completed in 22.10073087s

• [SLOW TEST:98.852 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:32:52.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f67e3fd1-9bf3-11e9-9f49-0242ac110006
STEP: Creating configMap with name cm-test-opt-upd-f67e407b-9bf3-11e9-9f49-0242ac110006
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f67e3fd1-9bf3-11e9-9f49-0242ac110006
STEP: Updating configmap cm-test-opt-upd-f67e407b-9bf3-11e9-9f49-0242ac110006
STEP: Creating configMap with name cm-test-opt-create-f67e40aa-9bf3-11e9-9f49-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:34:30.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f9r85" for this suite.
Jul  1 11:34:52.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:34:52.171: INFO: namespace: e2e-tests-configmap-f9r85, resource: bindings, ignored listing per whitelist
Jul  1 11:34:52.202: INFO: namespace e2e-tests-configmap-f9r85 deletion completed in 22.170853591s

• [SLOW TEST:119.709 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
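For reference only: the "optional" configmap volume behaviour the test above relies on. With optional: true the pod starts even if the referenced configmap is missing, and picks the data up once it is created; all names and the image below are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo           # illustrative name
spec:
  containers:
  - name: viewer
    image: busybox                 # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-config
      mountPath: /etc/maybe-config
  volumes:
  - name: maybe-config
    configMap:
      name: cm-that-may-not-exist  # illustrative name
      optional: true               # pod is admitted even if this configmap is absent
EOF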
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:34:52.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3ddb75eb-9bf4-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 11:34:52.401: INFO: Waiting up to 5m0s for pod "pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-zrfds" to be "success or failure"
Jul  1 11:34:52.411: INFO: Pod "pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.994796ms
Jul  1 11:34:54.432: INFO: Pod "pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031536413s
Jul  1 11:34:56.494: INFO: Pod "pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093659778s
STEP: Saw pod success
Jul  1 11:34:56.494: INFO: Pod "pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:34:56.501: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jul  1 11:34:56.544: INFO: Waiting for pod pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006 to disappear
Jul  1 11:34:56.561: INFO: Pod pod-secrets-3ddce688-9bf4-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:34:56.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zrfds" for this suite.
Jul  1 11:35:02.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:35:02.729: INFO: namespace: e2e-tests-secrets-zrfds, resource: bindings, ignored listing per whitelist
Jul  1 11:35:02.732: INFO: namespace e2e-tests-secrets-zrfds deletion completed in 6.165380392s

• [SLOW TEST:10.530 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:35:02.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul  1 11:35:07.399: INFO: Successfully updated pod "pod-update-441afeb0-9bf4-11e9-9f49-0242ac110006"
STEP: verifying the updated pod is in kubernetes
Jul  1 11:35:07.408: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:35:07.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ggtpd" for this suite.
Jul  1 11:35:29.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:35:29.467: INFO: namespace: e2e-tests-pods-ggtpd, resource: bindings, ignored listing per whitelist
Jul  1 11:35:29.517: INFO: namespace e2e-tests-pods-ggtpd deletion completed in 22.103518825s

• [SLOW TEST:26.784 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
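For reference only: the log does not show which field the test mutates, so the label change below is simply one legal in-place pod update (only a few pod fields, labels among them, are mutable after creation). The pod name is an illustrative assumption.

kubectl label pod pod-update-demo updated-at="demo" --overwrite   # illustrative pod name and label
kubectl get pod pod-update-demo --show-labels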
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:35:29.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:36:29.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-htq6b" for this suite.
Jul  1 11:36:51.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:36:51.662: INFO: namespace: e2e-tests-container-probe-htq6b, resource: bindings, ignored listing per whitelist
Jul  1 11:36:51.798: INFO: namespace e2e-tests-container-probe-htq6b deletion completed in 22.164300541s

• [SLOW TEST:82.280 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
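For reference only: a readiness probe that always fails, matching the condition the test above asserts (the pod keeps Running, never becomes Ready, and is never restarted). Pod name, image, and timings are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-demo       # illustrative name
spec:
  containers:
  - name: app
    image: busybox                 # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always fails, so the container never reports Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

kubectl get pod readiness-never-demo then keeps showing READY 0/1 with zero restarts.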
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:36:51.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul  1 11:36:51.945: INFO: Waiting up to 5m0s for pod "var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006" in namespace "e2e-tests-var-expansion-v2flj" to be "success or failure"
Jul  1 11:36:51.965: INFO: Pod "var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.737993ms
Jul  1 11:36:54.059: INFO: Pod "var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113995635s
Jul  1 11:36:56.064: INFO: Pod "var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119085344s
STEP: Saw pod success
Jul  1 11:36:56.064: INFO: Pod "var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:36:56.069: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006 container dapi-container: 
STEP: delete the pod
Jul  1 11:36:56.116: INFO: Waiting for pod var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006 to disappear
Jul  1 11:36:56.126: INFO: Pod var-expansion-85212e4c-9bf4-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:36:56.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-v2flj" for this suite.
Jul  1 11:37:02.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:37:02.173: INFO: namespace: e2e-tests-var-expansion-v2flj, resource: bindings, ignored listing per whitelist
Jul  1 11:37:02.283: INFO: namespace e2e-tests-var-expansion-v2flj deletion completed in 6.152459918s

• [SLOW TEST:10.485 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
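For reference only: $(VAR) references in a container's args are expanded from the container's environment before exec, which is the substitution the test above verifies. The pod name, image, and variable are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # illustrative image
    env:
    - name: GREETING
      value: "hello from the env"
    command: ["sh", "-c"]
    args: ["echo $(GREETING)"]     # $(GREETING) is expanded by the kubelet, not the shell
EOF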
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:37:02.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:37:06.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7dh4q" for this suite.
Jul  1 11:37:46.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:37:46.674: INFO: namespace: e2e-tests-kubelet-test-7dh4q, resource: bindings, ignored listing per whitelist
Jul  1 11:37:46.770: INFO: namespace e2e-tests-kubelet-test-7dh4q deletion completed in 40.179694495s

• [SLOW TEST:44.487 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
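For reference only: a read-only root filesystem is requested through the container securityContext, which is the property the kubelet test above checks. Pod name, image, and command are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox                 # illustrative image
    command: ["sh", "-c", "touch /should-fail || echo 'root filesystem is read-only'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF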
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:37:46.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-bqzdk
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-bqzdk
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-bqzdk

Jul  1 11:37:46.894: INFO: Found 0 stateful pods, waiting for 1
Jul  1 11:37:56.927: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul  1 11:37:56.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  1 11:37:57.226: INFO: stderr: ""
Jul  1 11:37:57.226: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  1 11:37:57.226: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  1 11:37:57.273: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:37:57.273: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:37:57.276: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jul  1 11:38:07.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999637s
Jul  1 11:38:08.305: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994611359s
Jul  1 11:38:09.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988469331s
Jul  1 11:38:10.591: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983816826s
Jul  1 11:38:11.595: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.702067464s
Jul  1 11:38:12.602: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.698011099s
Jul  1 11:38:13.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.691607377s
Jul  1 11:38:14.613: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.685783463s
Jul  1 11:38:15.619: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.680060387s
Jul  1 11:38:16.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 674.487193ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-bqzdk
Jul  1 11:38:17.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  1 11:38:17.949: INFO: stderr: ""
Jul  1 11:38:17.949: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  1 11:38:17.949: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  1 11:38:17.957: INFO: Found 1 stateful pods, waiting for 3
Jul  1 11:38:27.965: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:38:27.965: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  1 11:38:27.965: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul  1 11:38:27.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  1 11:38:28.241: INFO: stderr: ""
Jul  1 11:38:28.241: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  1 11:38:28.241: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  1 11:38:28.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  1 11:38:28.553: INFO: stderr: ""
Jul  1 11:38:28.553: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  1 11:38:28.553: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  1 11:38:28.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  1 11:38:28.805: INFO: stderr: ""
Jul  1 11:38:28.805: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  1 11:38:28.805: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  1 11:38:28.805: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:38:28.809: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul  1 11:38:38.819: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:38:38.819: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:38:38.819: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  1 11:38:38.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999418s
Jul  1 11:38:39.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990802574s
Jul  1 11:38:40.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983197511s
Jul  1 11:38:41.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978035524s
Jul  1 11:38:42.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97013366s
Jul  1 11:38:43.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.962898597s
Jul  1 11:38:44.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956996276s
Jul  1 11:38:45.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950583134s
Jul  1 11:38:46.938: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.902503588s
Jul  1 11:38:47.980: INFO: Verifying statefulset ss doesn't scale past 3 for another 894.248352ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-bqzdk
Jul  1 11:38:48.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  1 11:38:49.193: INFO: stderr: ""
Jul  1 11:38:49.193: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  1 11:38:49.193: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  1 11:38:49.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  1 11:38:49.388: INFO: stderr: ""
Jul  1 11:38:49.388: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  1 11:38:49.388: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  1 11:38:49.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bqzdk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  1 11:38:49.607: INFO: stderr: ""
Jul  1 11:38:49.607: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  1 11:38:49.607: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  1 11:38:49.607: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  1 11:38:59.619: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bqzdk
Jul  1 11:38:59.625: INFO: Scaling statefulset ss to 0
Jul  1 11:38:59.636: INFO: Waiting for statefulset status.replicas updated to 0
Jul  1 11:38:59.640: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:38:59.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-bqzdk" for this suite.
Jul  1 11:39:05.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:39:05.699: INFO: namespace: e2e-tests-statefulset-bqzdk, resource: bindings, ignored listing per whitelist
Jul  1 11:39:05.811: INFO: namespace e2e-tests-statefulset-bqzdk deletion completed in 6.14464928s

• [SLOW TEST:79.041 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
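For reference only: the ordered behaviour the test above verifies can be driven by hand with kubectl scale, assuming the StatefulSet keeps the default OrderedReady pod management policy. Scale-up creates ss-0, ss-1, ss-2 in order and scale-down removes them in reverse, and an un-Ready pod (here provoked by moving index.html aside) halts further progress. The namespace below is the one from the log.

kubectl scale statefulset ss --replicas=3 -n e2e-tests-statefulset-bqzdk
kubectl scale statefulset ss --replicas=0 -n e2e-tests-statefulset-bqzdk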
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:39:05.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jul  1 11:39:05.980: INFO: Waiting up to 5m0s for pod "pod-d5058505-9bf4-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-lnvcc" to be "success or failure"
Jul  1 11:39:06.002: INFO: Pod "pod-d5058505-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 21.54139ms
Jul  1 11:39:08.005: INFO: Pod "pod-d5058505-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025217034s
Jul  1 11:39:10.009: INFO: Pod "pod-d5058505-9bf4-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02903897s
STEP: Saw pod success
Jul  1 11:39:10.009: INFO: Pod "pod-d5058505-9bf4-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:39:10.012: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-d5058505-9bf4-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 11:39:10.097: INFO: Waiting for pod pod-d5058505-9bf4-11e9-9f49-0242ac110006 to disappear
Jul  1 11:39:10.172: INFO: Pod pod-d5058505-9bf4-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:39:10.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lnvcc" for this suite.
Jul  1 11:39:16.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:39:16.214: INFO: namespace: e2e-tests-emptydir-lnvcc, resource: bindings, ignored listing per whitelist
Jul  1 11:39:16.289: INFO: namespace e2e-tests-emptydir-lnvcc deletion completed in 6.111347249s

• [SLOW TEST:10.478 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
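
The EmptyDir spec above checks the permission mode of an emptyDir volume backed by the default (node disk) medium. A minimal pod that surfaces the same information, assuming a busybox image and illustrative names rather than the generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-check   # prints the mode the kubelet applied to the emptyDir mount
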
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:39:16.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul  1 11:39:16.482: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  1 11:39:21.487: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:39:22.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qrnj4" for this suite.
Jul  1 11:39:28.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:39:28.719: INFO: namespace: e2e-tests-replication-controller-qrnj4, resource: bindings, ignored listing per whitelist
Jul  1 11:39:28.723: INFO: namespace e2e-tests-replication-controller-qrnj4 deletion completed in 6.121697022s

• [SLOW TEST:12.434 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
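
The ReplicationController spec above relabels a pod out of the controller's selector and expects the controller to release (orphan) it while starting a replacement. A hand-run approximation, assuming the controller's selector is name=pod-release, matching the pod name prefix logged above:

POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
# The released pod should lose its controller ownerReference, and a replacement pod should appear.
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
kubectl get pods -l name=pod-release
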
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:39:28.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e29feb00-9bf4-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 11:39:28.810: INFO: Waiting up to 5m0s for pod "pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-zllt5" to be "success or failure"
Jul  1 11:39:28.867: INFO: Pod "pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 57.101304ms
Jul  1 11:39:30.874: INFO: Pod "pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064586082s
Jul  1 11:39:32.879: INFO: Pod "pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069249084s
STEP: Saw pod success
Jul  1 11:39:32.879: INFO: Pod "pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:39:32.883: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006 container secret-env-test: 
STEP: delete the pod
Jul  1 11:39:32.940: INFO: Waiting for pod pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006 to disappear
Jul  1 11:39:32.945: INFO: Pod pod-secrets-e2a0ea63-9bf4-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:39:32.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zllt5" for this suite.
Jul  1 11:39:39.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:39:39.110: INFO: namespace: e2e-tests-secrets-zllt5, resource: bindings, ignored listing per whitelist
Jul  1 11:39:39.179: INFO: namespace e2e-tests-secrets-zllt5 deletion completed in 6.218839283s

• [SLOW TEST:10.456 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
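
The Secrets spec above injects a secret key into a container environment variable and reads it back from the pod log. A minimal reproduction with illustrative secret, key, and pod names:

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-check           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF
kubectl logs secret-env-check      # expect SECRET_DATA=value-1 once the pod has completed
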
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:39:39.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:39:39.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-jmnlf" to be "success or failure"
Jul  1 11:39:39.332: INFO: Pod "downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273376ms
Jul  1 11:39:41.338: INFO: Pod "downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010347512s
Jul  1 11:39:43.418: INFO: Pod "downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090335742s
STEP: Saw pod success
Jul  1 11:39:43.418: INFO: Pod "downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:39:43.422: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:39:43.495: INFO: Waiting for pod downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006 to disappear
Jul  1 11:39:43.504: INFO: Pod downwardapi-volume-e8e56685-9bf4-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:39:43.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jmnlf" for this suite.
Jul  1 11:39:49.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:39:49.682: INFO: namespace: e2e-tests-projected-jmnlf, resource: bindings, ignored listing per whitelist
Jul  1 11:39:49.706: INFO: namespace e2e-tests-projected-jmnlf deletion completed in 6.190058783s

• [SLOW TEST:10.527 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
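
The projected downwardAPI spec above exposes the container's memory limit through a projected volume and reads it back from a file. A sketch of the same wiring; the names and the 64Mi limit are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-check
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs projected-memlimit-check   # prints the limit in bytes (67108864 for 64Mi)
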
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:39:49.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul  1 11:39:57.257: INFO: 0 pods remaining
Jul  1 11:39:57.257: INFO: 0 pods has nil DeletionTimestamp
Jul  1 11:39:57.257: INFO: 
STEP: Gathering metrics
W0701 11:39:58.226817       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  1 11:39:58.226: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:39:58.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hw5kf" for this suite.
Jul  1 11:40:04.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:40:04.684: INFO: namespace: e2e-tests-gc-hw5kf, resource: bindings, ignored listing per whitelist
Jul  1 11:40:04.757: INFO: namespace e2e-tests-gc-hw5kf deletion completed in 6.473007874s

• [SLOW TEST:15.051 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
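
The garbage-collector spec above deletes a ReplicationController with DeleteOptions requesting foreground cascading, so the RC object is kept (carrying the foregroundDeletion finalizer) until every pod it owns is gone. One way to issue the same kind of delete against the API by hand, via kubectl proxy; the RC name and namespace here are illustrative, not the generated ones:

kubectl proxy --port=8001 &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc
# While the pods terminate, the RC still exists and shows the foregroundDeletion finalizer:
kubectl get rc simpletest-rc -o jsonpath='{.metadata.finalizers}'
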
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:40:04.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n2jcs
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  1 11:40:04.904: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  1 11:40:29.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n2jcs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  1 11:40:29.031: INFO: >>> kubeConfig: /root/.kube/config
Jul  1 11:40:29.201: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:40:29.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-n2jcs" for this suite.
Jul  1 11:40:41.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:40:41.249: INFO: namespace: e2e-tests-pod-network-test-n2jcs, resource: bindings, ignored listing per whitelist
Jul  1 11:40:41.445: INFO: namespace e2e-tests-pod-network-test-n2jcs deletion completed in 12.239961984s

• [SLOW TEST:36.687 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:40:41.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  1 11:40:41.604: INFO: Number of nodes with available pods: 0
Jul  1 11:40:41.604: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jul  1 11:40:42.618: INFO: Number of nodes with available pods: 0
Jul  1 11:40:42.619: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jul  1 11:40:43.618: INFO: Number of nodes with available pods: 0
Jul  1 11:40:43.618: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jul  1 11:40:44.612: INFO: Number of nodes with available pods: 1
Jul  1 11:40:44.612: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul  1 11:40:44.702: INFO: Number of nodes with available pods: 0
Jul  1 11:40:44.702: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jul  1 11:40:45.713: INFO: Number of nodes with available pods: 0
Jul  1 11:40:45.713: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jul  1 11:40:46.713: INFO: Number of nodes with available pods: 0
Jul  1 11:40:46.713: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jul  1 11:40:47.715: INFO: Number of nodes with available pods: 1
Jul  1 11:40:47.715: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dkn5p, will wait for the garbage collector to delete the pods
Jul  1 11:40:47.793: INFO: Deleting DaemonSet.extensions daemon-set took: 10.015206ms
Jul  1 11:40:47.893: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.177011ms
Jul  1 11:40:55.899: INFO: Number of nodes with available pods: 0
Jul  1 11:40:55.899: INFO: Number of running nodes: 0, number of available pods: 0
Jul  1 11:40:55.903: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dkn5p/daemonsets","resourceVersion":"1848571"},"items":null}

Jul  1 11:40:55.907: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dkn5p/pods","resourceVersion":"1848571"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:40:55.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dkn5p" for this suite.
Jul  1 11:41:02.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:41:02.486: INFO: namespace: e2e-tests-daemonsets-dkn5p, resource: bindings, ignored listing per whitelist
Jul  1 11:41:02.595: INFO: namespace e2e-tests-daemonsets-dkn5p deletion completed in 6.670572355s

• [SLOW TEST:21.150 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
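
The DaemonSet spec above flips a daemon pod's phase to Failed through the API and expects the controller to delete it and create a fresh one. kubectl cannot set a pod phase directly, so deleting the daemon pod is only a rough approximation of the recovery path; DaemonSet name, label, and image below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl delete pod -l app=daemon-set    # stand-in for the pod the suite marks Failed
kubectl get pods -l app=daemon-set -w   # the controller recreates one pod per eligible node
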
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:41:02.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:41:02.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-fcl2j" to be "success or failure"
Jul  1 11:41:02.705: INFO: Pod "downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.633329ms
Jul  1 11:41:04.708: INFO: Pod "downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008959791s
Jul  1 11:41:06.714: INFO: Pod "downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014442969s
STEP: Saw pod success
Jul  1 11:41:06.714: INFO: Pod "downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:41:06.718: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:41:06.762: INFO: Waiting for pod downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006 to disappear
Jul  1 11:41:06.822: INFO: Pod downwardapi-volume-1a96e96f-9bf5-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:41:06.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fcl2j" for this suite.
Jul  1 11:41:12.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:41:13.030: INFO: namespace: e2e-tests-projected-fcl2j, resource: bindings, ignored listing per whitelist
Jul  1 11:41:13.131: INFO: namespace e2e-tests-projected-fcl2j deletion completed in 6.304938063s

• [SLOW TEST:10.536 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
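
Same projected downwardAPI mechanism as the memory-limit spec earlier, but here only the pod name is exposed, via fieldRef. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-check
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs projected-podname-check   # prints "projected-podname-check"
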
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:41:13.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul  1 11:41:13.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:15.183: INFO: stderr: ""
Jul  1 11:41:15.183: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 11:41:15.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:15.285: INFO: stderr: ""
Jul  1 11:41:15.285: INFO: stdout: "update-demo-nautilus-2fm2g update-demo-nautilus-9bpsg "
Jul  1 11:41:15.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fm2g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:15.356: INFO: stderr: ""
Jul  1 11:41:15.356: INFO: stdout: ""
Jul  1 11:41:15.356: INFO: update-demo-nautilus-2fm2g is created but not running
Jul  1 11:41:20.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:20.485: INFO: stderr: ""
Jul  1 11:41:20.485: INFO: stdout: "update-demo-nautilus-2fm2g update-demo-nautilus-9bpsg "
Jul  1 11:41:20.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fm2g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:20.582: INFO: stderr: ""
Jul  1 11:41:20.582: INFO: stdout: "true"
Jul  1 11:41:20.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2fm2g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:20.697: INFO: stderr: ""
Jul  1 11:41:20.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 11:41:20.697: INFO: validating pod update-demo-nautilus-2fm2g
Jul  1 11:41:20.704: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 11:41:20.704: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  1 11:41:20.704: INFO: update-demo-nautilus-2fm2g is verified up and running
Jul  1 11:41:20.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bpsg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:20.805: INFO: stderr: ""
Jul  1 11:41:20.805: INFO: stdout: "true"
Jul  1 11:41:20.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bpsg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:20.886: INFO: stderr: ""
Jul  1 11:41:20.886: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 11:41:20.886: INFO: validating pod update-demo-nautilus-9bpsg
Jul  1 11:41:20.896: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 11:41:20.896: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  1 11:41:20.896: INFO: update-demo-nautilus-9bpsg is verified up and running
STEP: rolling-update to new replication controller
Jul  1 11:41:20.897: INFO: scanned /root for discovery docs: 
Jul  1 11:41:20.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:43.558: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  1 11:41:43.558: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 11:41:43.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:43.685: INFO: stderr: ""
Jul  1 11:41:43.686: INFO: stdout: "update-demo-kitten-cbfg2 update-demo-kitten-hvwn7 update-demo-nautilus-9bpsg "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jul  1 11:41:48.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:48.779: INFO: stderr: ""
Jul  1 11:41:48.779: INFO: stdout: "update-demo-kitten-cbfg2 update-demo-kitten-hvwn7 "
Jul  1 11:41:48.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cbfg2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:48.850: INFO: stderr: ""
Jul  1 11:41:48.850: INFO: stdout: "true"
Jul  1 11:41:48.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cbfg2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:48.918: INFO: stderr: ""
Jul  1 11:41:48.918: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  1 11:41:48.918: INFO: validating pod update-demo-kitten-cbfg2
Jul  1 11:41:48.930: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  1 11:41:48.930: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  1 11:41:48.930: INFO: update-demo-kitten-cbfg2 is verified up and running
Jul  1 11:41:48.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hvwn7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:49.021: INFO: stderr: ""
Jul  1 11:41:49.021: INFO: stdout: "true"
Jul  1 11:41:49.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hvwn7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qkqrh'
Jul  1 11:41:49.087: INFO: stderr: ""
Jul  1 11:41:49.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  1 11:41:49.087: INFO: validating pod update-demo-kitten-hvwn7
Jul  1 11:41:49.091: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  1 11:41:49.091: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  1 11:41:49.091: INFO: update-demo-kitten-hvwn7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:41:49.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qkqrh" for this suite.
Jul  1 11:42:11.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:42:11.317: INFO: namespace: e2e-tests-kubectl-qkqrh, resource: bindings, ignored listing per whitelist
Jul  1 11:42:11.432: INFO: namespace e2e-tests-kubectl-qkqrh deletion completed in 22.338799772s

• [SLOW TEST:58.300 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
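
As the stderr above notes, kubectl rolling-update is deprecated in favour of kubectl rollout on Deployments. The same nautilus-to-kitten image swap, expressed with the non-deprecated commands and an illustrative Deployment name:

kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
kubectl get pods -l app=update-demo -o jsonpath='{.items[*].spec.containers[0].image}'
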
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:42:11.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 11:42:37.655: INFO: Container started at 2019-07-01 11:42:14 +0000 UTC, pod became ready at 2019-07-01 11:42:35 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:42:37.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hwgcb" for this suite.
Jul  1 11:42:59.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:42:59.839: INFO: namespace: e2e-tests-container-probe-hwgcb, resource: bindings, ignored listing per whitelist
Jul  1 11:42:59.846: INFO: namespace e2e-tests-container-probe-hwgcb deletion completed in 22.187652188s

• [SLOW TEST:48.415 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
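
The probe spec above asserts that a container with an initial-delay readiness probe is not reported Ready before the delay elapses (the gap logged above between container start and readiness) and that it never restarts. A pod of the same shape, where the image, port, and delay values are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-check
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 5
EOF
kubectl get pod readiness-delay-check -w    # READY flips to 1/1 only after the initial delay
kubectl get pod readiness-delay-check -o jsonpath='{.status.containerStatuses[0].restartCount}'   # stays 0
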
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:42:59.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  1 11:42:59.984: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:43:04.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fc7r2" for this suite.
Jul  1 11:43:10.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:43:10.568: INFO: namespace: e2e-tests-init-container-fc7r2, resource: bindings, ignored listing per whitelist
Jul  1 11:43:10.574: INFO: namespace e2e-tests-init-container-fc7r2 deletion completed in 6.142050448s

• [SLOW TEST:10.728 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
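
The InitContainer spec above runs a RestartNever pod whose init containers must each run to completion, in order, before the app container starts. A minimal pod of that shape, with illustrative names and busybox assumed for all containers:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-restartnever-check
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]
EOF
# Both init containers should report Completed once the pod has run:
kubectl get pod init-restartnever-check -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
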
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:43:10.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-66dbcc5f-9bf5-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 11:43:10.711: INFO: Waiting up to 5m0s for pod "pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-lnv99" to be "success or failure"
Jul  1 11:43:10.723: INFO: Pod "pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.246302ms
Jul  1 11:43:12.902: INFO: Pod "pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191285184s
Jul  1 11:43:14.910: INFO: Pod "pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199485952s
STEP: Saw pod success
Jul  1 11:43:14.910: INFO: Pod "pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:43:14.915: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jul  1 11:43:15.001: INFO: Waiting for pod pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006 to disappear
Jul  1 11:43:15.005: INFO: Pod pod-secrets-66dc96c1-9bf5-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:43:15.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lnv99" for this suite.
Jul  1 11:43:21.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:43:21.045: INFO: namespace: e2e-tests-secrets-lnv99, resource: bindings, ignored listing per whitelist
Jul  1 11:43:21.117: INFO: namespace e2e-tests-secrets-lnv99 deletion completed in 6.109153383s

• [SLOW TEST:10.542 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
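
The Secrets volume spec above mounts a secret with an items mapping, so a key is projected to a chosen file path instead of a file named after the key. A minimal reproduction with illustrative names:

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-check
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
EOF
kubectl logs secret-map-check   # expect "value-1"
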
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:43:21.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 11:43:21.231: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul  1 11:43:21.244: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul  1 11:43:26.252: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  1 11:43:26.252: INFO: Creating deployment "test-rolling-update-deployment"
Jul  1 11:43:26.261: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul  1 11:43:26.283: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul  1 11:43:28.289: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul  1 11:43:28.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697578206, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697578206, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697578206, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697578206, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-68b55d7bc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 11:43:30.294: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  1 11:43:30.301: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2k7dp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2k7dp/deployments/test-rolling-update-deployment,UID:7027b397-9bf5-11e9-a678-fa163e0cec1d,ResourceVersion:1849061,Generation:1,CreationTimestamp:2019-07-01 11:43:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-07-01 11:43:26 +0000 UTC 2019-07-01 11:43:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-07-01 11:43:29 +0000 UTC 2019-07-01 11:43:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-68b55d7bc6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul  1 11:43:30.303: INFO: New ReplicaSet "test-rolling-update-deployment-68b55d7bc6" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6,GenerateName:,Namespace:e2e-tests-deployment-2k7dp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2k7dp/replicasets/test-rolling-update-deployment-68b55d7bc6,UID:702df000-9bf5-11e9-a678-fa163e0cec1d,ResourceVersion:1849052,Generation:1,CreationTimestamp:2019-07-01 11:43:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7027b397-9bf5-11e9-a678-fa163e0cec1d 0xc00171a047 0xc00171a048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  1 11:43:30.303: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul  1 11:43:30.303: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2k7dp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2k7dp/replicasets/test-rolling-update-controller,UID:6d295de1-9bf5-11e9-a678-fa163e0cec1d,ResourceVersion:1849060,Generation:2,CreationTimestamp:2019-07-01 11:43:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7027b397-9bf5-11e9-a678-fa163e0cec1d 0xc000553f2f 0xc000553f60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 11:43:30.306: INFO: Pod "test-rolling-update-deployment-68b55d7bc6-cnjbf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6-cnjbf,GenerateName:test-rolling-update-deployment-68b55d7bc6-,Namespace:e2e-tests-deployment-2k7dp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2k7dp/pods/test-rolling-update-deployment-68b55d7bc6-cnjbf,UID:70371a6b-9bf5-11e9-a678-fa163e0cec1d,ResourceVersion:1849051,Generation:0,CreationTimestamp:2019-07-01 11:43:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-68b55d7bc6 702df000-9bf5-11e9-a678-fa163e0cec1d 0xc0012d9337 0xc0012d9338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xnrqp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xnrqp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xnrqp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012d9480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012d94b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:43:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:43:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:43:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:43:26 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.5,StartTime:2019-07-01 11:43:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-07-01 11:43:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://17cb1316924c0e3f64bcf7ed35e5b79ef50c208c4b3a6e4456fa5b8f7213f95d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:43:30.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2k7dp" for this suite.
Jul  1 11:43:38.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:43:38.401: INFO: namespace: e2e-tests-deployment-2k7dp, resource: bindings, ignored listing per whitelist
Jul  1 11:43:38.445: INFO: namespace e2e-tests-deployment-2k7dp deletion completed in 8.135892321s

• [SLOW TEST:17.328 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
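
The Deployment spec above adopts an existing replica set and replaces its pods through the default RollingUpdate strategy (maxUnavailable and maxSurge both 25%, as dumped above). The same old-pods-out, new-pods-in behaviour can be watched by updating a Deployment's image; the Deployment name here is illustrative:

kubectl create deployment test-rolling-update --image=docker.io/library/nginx:1.14-alpine
kubectl set image deployment/test-rolling-update nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/test-rolling-update
kubectl get rs -l app=test-rolling-update   # old ReplicaSet scaled to 0, new one owns the pods
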
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:43:38.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  1 11:43:38.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-j9d9r'
Jul  1 11:43:38.758: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  1 11:43:38.758: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jul  1 11:43:42.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-j9d9r'
Jul  1 11:43:42.978: INFO: stderr: ""
Jul  1 11:43:42.978: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:43:42.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j9d9r" for this suite.
Jul  1 11:44:05.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:44:05.160: INFO: namespace: e2e-tests-kubectl-j9d9r, resource: bindings, ignored listing per whitelist
Jul  1 11:44:05.194: INFO: namespace e2e-tests-kubectl-j9d9r deletion completed in 22.210888335s

• [SLOW TEST:26.749 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
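Editor's note: the deprecated --generator=deployment/v1beta1 invocation above creates, roughly, the Deployment sketched below, written here against the current apps/v1 API rather than the extensions-group object the log shows being created. The namespace is omitted and the run label follows kubectl's usual convention; both are assumptions for illustration.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: e2e-test-nginx-deployment
      labels:
        run: e2e-test-nginx-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: e2e-test-nginx-deployment
      template:
        metadata:
          labels:
            run: e2e-test-nginx-deployment
        spec:
          containers:
          - name: e2e-test-nginx-deployment
            image: docker.io/library/nginx:1.14-alpine

The spec then verifies that both the Deployment and the pod it controls exist, and finally deletes the Deployment, which garbage-collects the pod.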
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:44:05.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  1 11:44:05.344: INFO: Waiting up to 5m0s for pod "downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-9tthx" to be "success or failure"
Jul  1 11:44:05.361: INFO: Pod "downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.407877ms
Jul  1 11:44:07.366: INFO: Pod "downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022259768s
Jul  1 11:44:09.372: INFO: Pod "downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028189261s
STEP: Saw pod success
Jul  1 11:44:09.372: INFO: Pod "downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:44:09.375: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006 container dapi-container: 
STEP: delete the pod
Jul  1 11:44:09.447: INFO: Waiting for pod downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006 to disappear
Jul  1 11:44:09.455: INFO: Pod downward-api-87747ed0-9bf5-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:44:09.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9tthx" for this suite.
Jul  1 11:44:15.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:44:15.492: INFO: namespace: e2e-tests-downward-api-9tthx, resource: bindings, ignored listing per whitelist
Jul  1 11:44:15.647: INFO: namespace e2e-tests-downward-api-9tthx deletion completed in 6.188066343s

• [SLOW TEST:10.453 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
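Editor's note: a minimal sketch of the kind of pod this Downward API spec creates, with environment variables populated from pod metadata via fieldRef. The pod name, image, and command are illustrative assumptions; only the fieldRef paths are the point.

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo            # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep POD_"]   # print the injected variables and exit
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid

The spec asserts that the container's output contains the pod's actual UID, i.e. that metadata.uid resolves at runtime.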
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:44:15.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul  1 11:44:16.549: INFO: Pod name wrapped-volume-race-8e1cf4bc-9bf5-11e9-9f49-0242ac110006: Found 0 pods out of 5
Jul  1 11:44:21.567: INFO: Pod name wrapped-volume-race-8e1cf4bc-9bf5-11e9-9f49-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8e1cf4bc-9bf5-11e9-9f49-0242ac110006 in namespace e2e-tests-emptydir-wrapper-7s6dh, will wait for the garbage collector to delete the pods
Jul  1 11:46:05.684: INFO: Deleting ReplicationController wrapped-volume-race-8e1cf4bc-9bf5-11e9-9f49-0242ac110006 took: 23.229321ms
Jul  1 11:46:05.885: INFO: Terminating ReplicationController wrapped-volume-race-8e1cf4bc-9bf5-11e9-9f49-0242ac110006 pods took: 200.253381ms
STEP: Creating RC which spawns configmap-volume pods
Jul  1 11:46:46.260: INFO: Pod name wrapped-volume-race-e7541bdb-9bf5-11e9-9f49-0242ac110006: Found 0 pods out of 5
Jul  1 11:46:51.813: INFO: Pod name wrapped-volume-race-e7541bdb-9bf5-11e9-9f49-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e7541bdb-9bf5-11e9-9f49-0242ac110006 in namespace e2e-tests-emptydir-wrapper-7s6dh, will wait for the garbage collector to delete the pods
Jul  1 11:49:07.944: INFO: Deleting ReplicationController wrapped-volume-race-e7541bdb-9bf5-11e9-9f49-0242ac110006 took: 26.931981ms
Jul  1 11:49:08.145: INFO: Terminating ReplicationController wrapped-volume-race-e7541bdb-9bf5-11e9-9f49-0242ac110006 pods took: 200.27108ms
STEP: Creating RC which spawns configmap-volume pods
Jul  1 11:49:46.956: INFO: Pod name wrapped-volume-race-5302a775-9bf6-11e9-9f49-0242ac110006: Found 0 pods out of 5
Jul  1 11:49:51.968: INFO: Pod name wrapped-volume-race-5302a775-9bf6-11e9-9f49-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5302a775-9bf6-11e9-9f49-0242ac110006 in namespace e2e-tests-emptydir-wrapper-7s6dh, will wait for the garbage collector to delete the pods
Jul  1 11:52:28.147: INFO: Deleting ReplicationController wrapped-volume-race-5302a775-9bf6-11e9-9f49-0242ac110006 took: 81.438818ms
Jul  1 11:52:28.347: INFO: Terminating ReplicationController wrapped-volume-race-5302a775-9bf6-11e9-9f49-0242ac110006 pods took: 200.251153ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:53:07.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-7s6dh" for this suite.
Jul  1 11:53:16.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:53:16.032: INFO: namespace: e2e-tests-emptydir-wrapper-7s6dh, resource: bindings, ignored listing per whitelist
Jul  1 11:53:16.169: INFO: namespace e2e-tests-emptydir-wrapper-7s6dh deletion completed in 8.177387901s

• [SLOW TEST:540.522 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
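Editor's note: the spec above repeatedly spawns ReplicationController pods that each mount many ConfigMap volumes, to shake out races in the emptyDir "wrapper" that backs such volumes. A trimmed sketch of one such pod follows, with two ConfigMap volumes instead of fifty; all names are illustrative and the referenced ConfigMaps are assumed to exist already.

    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapped-volume-race-demo     # hypothetical name
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cfg-0
          mountPath: /etc/cfg-0
        - name: cfg-1
          mountPath: /etc/cfg-1
      volumes:
      - name: cfg-0
        configMap:
          name: racey-configmap-0        # assumed pre-created ConfigMap
      - name: cfg-1
        configMap:
          name: racey-configmap-1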
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:53:16.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  1 11:53:16.329: INFO: Waiting up to 5m0s for pod "downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-zrqjx" to be "success or failure"
Jul  1 11:53:16.338: INFO: Pod "downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74896ms
Jul  1 11:53:18.374: INFO: Pod "downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045157558s
Jul  1 11:53:20.380: INFO: Pod "downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050672684s
STEP: Saw pod success
Jul  1 11:53:20.380: INFO: Pod "downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:53:20.383: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006 container dapi-container: 
STEP: delete the pod
Jul  1 11:53:20.493: INFO: Waiting for pod downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006 to disappear
Jul  1 11:53:20.501: INFO: Pod downward-api-cfdd8757-9bf6-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:53:20.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zrqjx" for this suite.
Jul  1 11:53:26.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:53:26.540: INFO: namespace: e2e-tests-downward-api-zrqjx, resource: bindings, ignored listing per whitelist
Jul  1 11:53:26.719: INFO: namespace e2e-tests-downward-api-zrqjx deletion completed in 6.212244503s

• [SLOW TEST:10.549 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
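Editor's note: this spec checks that when a container declares no resource limits, Downward API resourceFieldRef values for limits.cpu and limits.memory fall back to the node's allocatable capacity. A minimal sketch, with hypothetical names and an illustrative image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
        # no resources.limits are set, so the values below resolve to node allocatable
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory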
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:53:26.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul  1 11:53:26.886: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  1 11:53:26.896: INFO: Waiting for terminating namespaces to be deleted...
Jul  1 11:53:26.899: INFO: 
Logging pods the kubelet thinks are on node hunter-server-x6tdbol33slm before test
Jul  1 11:53:26.907: INFO: kube-apiserver-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jul  1 11:53:26.907: INFO: weave-net-z4vkv from kube-system started at 2019-06-16 12:55:36 +0000 UTC (2 container statuses recorded)
Jul  1 11:53:26.907: INFO: 	Container weave ready: true, restart count 0
Jul  1 11:53:26.907: INFO: 	Container weave-npc ready: true, restart count 0
Jul  1 11:53:26.907: INFO: coredns-86c58d9df4-zdm4x from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jul  1 11:53:26.907: INFO: 	Container coredns ready: true, restart count 0
Jul  1 11:53:26.907: INFO: kube-scheduler-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jul  1 11:53:26.907: INFO: coredns-86c58d9df4-99n2k from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jul  1 11:53:26.907: INFO: 	Container coredns ready: true, restart count 0
Jul  1 11:53:26.907: INFO: kube-proxy-ww64l from kube-system started at 2019-06-16 12:55:34 +0000 UTC (1 container statuses recorded)
Jul  1 11:53:26.907: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 11:53:26.907: INFO: etcd-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jul  1 11:53:26.907: INFO: kube-controller-manager-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ad465c0bf5e2b0], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:53:27.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-xhstb" for this suite.
Jul  1 11:53:33.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:53:33.993: INFO: namespace: e2e-tests-sched-pred-xhstb, resource: bindings, ignored listing per whitelist
Jul  1 11:53:34.075: INFO: namespace e2e-tests-sched-pred-xhstb deletion completed in 6.128295094s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.356 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
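Editor's note: the scheduling spec above creates a pod whose nodeSelector matches no node and then waits for a FailedScheduling event like the one logged. A sketch of such a pod; the label key/value and image are illustrative.

    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod-demo
    spec:
      nodeSelector:
        example.com/nonexistent-label: "42"   # no node carries this label
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

The pod stays Pending, and describing it shows the same message as the event above: "0/1 nodes are available: 1 node(s) didn't match node selector."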
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:53:34.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-da8be74b-9bf6-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 11:53:34.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-gm57l" to be "success or failure"
Jul  1 11:53:34.277: INFO: Pod "pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.302015ms
Jul  1 11:53:36.282: INFO: Pod "pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014356757s
Jul  1 11:53:38.306: INFO: Pod "pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038035291s
STEP: Saw pod success
Jul  1 11:53:38.306: INFO: Pod "pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:53:38.310: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jul  1 11:53:38.372: INFO: Waiting for pod pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006 to disappear
Jul  1 11:53:38.377: INFO: Pod pod-projected-secrets-da8e5a5d-9bf6-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:53:38.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gm57l" for this suite.
Jul  1 11:53:44.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:53:44.479: INFO: namespace: e2e-tests-projected-gm57l, resource: bindings, ignored listing per whitelist
Jul  1 11:53:44.553: INFO: namespace e2e-tests-projected-gm57l deletion completed in 6.170920617s

• [SLOW TEST:10.478 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
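Editor's note: a sketch of the projected-secret pattern this spec exercises: a Secret key remapped to a custom path with an explicit per-item file mode inside a projected volume. The pod name, Secret name, key, and mode are illustrative assumptions, and the Secret is assumed to exist.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
        volumeMounts:
        - name: projected-secret
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: projected-secret
        projected:
          sources:
          - secret:
              name: projected-secret-test-map   # assumed existing Secret with key data-1
              items:
              - key: data-1
                path: new-path-data-1
                mode: 0400                      # item-level mode override (r-- for owner)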
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:53:44.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul  1 11:53:51.766: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:53:51.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9h68v" for this suite.
Jul  1 11:54:15.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:54:15.959: INFO: namespace: e2e-tests-replicaset-9h68v, resource: bindings, ignored listing per whitelist
Jul  1 11:54:16.055: INFO: namespace e2e-tests-replicaset-9h68v deletion completed in 24.18408394s

• [SLOW TEST:31.501 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
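Editor's note: the ReplicaSet spec above relies on label-selector ownership. A bare pod that already carries the selector's labels is adopted when the ReplicaSet is created, and relabelling the pod releases it again. A sketch of the two objects involved; all names are illustrative.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-adoption-release
      labels:
        name: pod-adoption-release       # matches the ReplicaSet selector below
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
    ---
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: pod-adoption-release
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: pod-adoption-release
      template:
        metadata:
          labels:
            name: pod-adoption-release
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine

Changing the pod's label so it no longer matches the selector causes the ReplicaSet to drop its ownerReference and create a replacement pod, which is the "released" state the spec checks.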
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:54:16.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jul  1 11:54:20.260: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:54:44.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-m4l95" for this suite.
Jul  1 11:54:50.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:54:50.440: INFO: namespace: e2e-tests-namespaces-m4l95, resource: bindings, ignored listing per whitelist
Jul  1 11:54:50.497: INFO: namespace e2e-tests-namespaces-m4l95 deletion completed in 6.097483392s
STEP: Destroying namespace "e2e-tests-nsdeletetest-9vcgs" for this suite.
Jul  1 11:54:50.498: INFO: Namespace e2e-tests-nsdeletetest-9vcgs was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-znlg6" for this suite.
Jul  1 11:54:56.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:54:56.588: INFO: namespace: e2e-tests-nsdeletetest-znlg6, resource: bindings, ignored listing per whitelist
Jul  1 11:54:56.643: INFO: namespace e2e-tests-nsdeletetest-znlg6 deletion completed in 6.144261701s

• [SLOW TEST:40.588 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
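Editor's note: this spec verifies that deleting a namespace cascades to the pods inside it. A sketch of the setup, with illustrative names: deleting the namespace removes the pod as well, and recreating a namespace of the same name afterwards shows no pods in it.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: nsdeletetest-demo
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
      namespace: nsdeletetest-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine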
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:54:56.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  1 11:54:56.847: INFO: Waiting up to 5m0s for pod "pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-6xnvn" to be "success or failure"
Jul  1 11:54:56.852: INFO: Pod "pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673429ms
Jul  1 11:54:58.855: INFO: Pod "pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008376418s
Jul  1 11:55:00.861: INFO: Pod "pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014075827s
STEP: Saw pod success
Jul  1 11:55:00.861: INFO: Pod "pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:55:00.866: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 11:55:00.904: INFO: Waiting for pod pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006 to disappear
Jul  1 11:55:00.921: INFO: Pod pod-0bc7b15d-9bf7-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:55:00.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6xnvn" for this suite.
Jul  1 11:55:06.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:55:07.081: INFO: namespace: e2e-tests-emptydir-6xnvn, resource: bindings, ignored listing per whitelist
Jul  1 11:55:07.127: INFO: namespace e2e-tests-emptydir-6xnvn deletion completed in 6.19718233s

• [SLOW TEST:10.484 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
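Editor's note: a sketch of the emptyDir permission check this spec performs: mount an emptyDir on the default medium, create a file as root with mode 0777, and inspect what the container sees. Image, paths, and names are illustrative.

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command:
        - sh
        - -c
        - "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                     # default medium (node disk)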
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:55:07.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 11:55:07.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-mrsn9" to be "success or failure"
Jul  1 11:55:07.293: INFO: Pod "downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.88189ms
Jul  1 11:55:09.301: INFO: Pod "downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027126735s
Jul  1 11:55:11.305: INFO: Pod "downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031039727s
STEP: Saw pod success
Jul  1 11:55:11.305: INFO: Pod "downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:55:11.308: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 11:55:11.341: INFO: Waiting for pod downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006 to disappear
Jul  1 11:55:11.354: INFO: Pod downwardapi-volume-11fec297-9bf7-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:55:11.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mrsn9" for this suite.
Jul  1 11:55:17.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:55:17.481: INFO: namespace: e2e-tests-projected-mrsn9, resource: bindings, ignored listing per whitelist
Jul  1 11:55:17.566: INFO: namespace e2e-tests-projected-mrsn9 deletion completed in 6.201913516s

• [SLOW TEST:10.439 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
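Editor's note: same idea as the earlier Downward API env-var spec, but through a projected downwardAPI volume: with no cpu limit declared on the container, the cpu_limit file resolves to the node's allocatable CPU. A minimal sketch with illustrative names; note that resourceFieldRef inside a volume item must name the container explicitly.

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-downwardapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m            # report the value in millicores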
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:55:17.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 11:55:17.688: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul  1 11:55:22.694: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  1 11:55:22.694: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  1 11:55:22.736: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-wwvjd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wwvjd/deployments/test-cleanup-deployment,UID:1b31027b-9bf7-11e9-a678-fa163e0cec1d,ResourceVersion:1850680,Generation:1,CreationTimestamp:2019-07-01 11:55:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jul  1 11:55:22.844: INFO: New ReplicaSet "test-cleanup-deployment-7dbbfcf846" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-7dbbfcf846,GenerateName:,Namespace:e2e-tests-deployment-wwvjd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wwvjd/replicasets/test-cleanup-deployment-7dbbfcf846,UID:1b35a703-9bf7-11e9-a678-fa163e0cec1d,ResourceVersion:1850688,Generation:1,CreationTimestamp:2019-07-01 11:55:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1b31027b-9bf7-11e9-a678-fa163e0cec1d 0xc001418b17 0xc001418b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 11:55:22.844: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul  1 11:55:22.844: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-wwvjd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wwvjd/replicasets/test-cleanup-controller,UID:1832c67c-9bf7-11e9-a678-fa163e0cec1d,ResourceVersion:1850681,Generation:1,CreationTimestamp:2019-07-01 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1b31027b-9bf7-11e9-a678-fa163e0cec1d 0xc0014189c7 0xc0014189c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  1 11:55:22.859: INFO: Pod "test-cleanup-controller-xrd45" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-xrd45,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-wwvjd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wwvjd/pods/test-cleanup-controller-xrd45,UID:1834e4d0-9bf7-11e9-a678-fa163e0cec1d,ResourceVersion:1850675,Generation:0,CreationTimestamp:2019-07-01 11:55:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 1832c67c-9bf7-11e9-a678-fa163e0cec1d 0xc001419517 0xc001419518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zq856 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zq856,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zq856 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014195b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001419620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:55:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:55:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:55:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:55:17 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.4,StartTime:2019-07-01 11:55:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 11:55:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://24703168a4225ac24f9f99156054df0776f9ad98ea9f59865c56a382eba95054}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 11:55:22.859: INFO: Pod "test-cleanup-deployment-7dbbfcf846-dx84w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-7dbbfcf846-dx84w,GenerateName:test-cleanup-deployment-7dbbfcf846-,Namespace:e2e-tests-deployment-wwvjd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wwvjd/pods/test-cleanup-deployment-7dbbfcf846-dx84w,UID:1b36916f-9bf7-11e9-a678-fa163e0cec1d,ResourceVersion:1850686,Generation:0,CreationTimestamp:2019-07-01 11:55:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 7dbbfcf846,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-7dbbfcf846 1b35a703-9bf7-11e9-a678-fa163e0cec1d 0xc001419887 0xc001419888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zq856 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zq856,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-zq856 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014199b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0014199e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 11:55:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:55:22.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wwvjd" for this suite.
Jul  1 11:55:28.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:55:29.019: INFO: namespace: e2e-tests-deployment-wwvjd, resource: bindings, ignored listing per whitelist
Jul  1 11:55:29.020: INFO: namespace e2e-tests-deployment-wwvjd deletion completed in 6.139481894s

• [SLOW TEST:11.454 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
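Editor's note: the cleanup behaviour above is driven by spec.revisionHistoryLimit (the object dump shows RevisionHistoryLimit:*0). A sketch of a Deployment that keeps no old ReplicaSets around; the name is hypothetical and the image/label mirror the dump only for illustration.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-cleanup-demo
    spec:
      replicas: 1
      revisionHistoryLimit: 0            # delete old ReplicaSets as soon as they are scaled down
      selector:
        matchLabels:
          name: cleanup-pod
      template:
        metadata:
          labels:
            name: cleanup-pod
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0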
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:55:29.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1f09b678-9bf7-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 11:55:29.284: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-ldfhv" to be "success or failure"
Jul  1 11:55:29.293: INFO: Pod "pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.159201ms
Jul  1 11:55:31.308: INFO: Pod "pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02388077s
Jul  1 11:55:33.313: INFO: Pod "pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029278424s
STEP: Saw pod success
Jul  1 11:55:33.313: INFO: Pod "pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 11:55:33.318: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 11:55:33.440: INFO: Waiting for pod pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006 to disappear
Jul  1 11:55:33.444: INFO: Pod pod-projected-configmaps-1f0c3565-9bf7-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:55:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ldfhv" for this suite.
Jul  1 11:55:39.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:55:39.512: INFO: namespace: e2e-tests-projected-ldfhv, resource: bindings, ignored listing per whitelist
Jul  1 11:55:39.590: INFO: namespace e2e-tests-projected-ldfhv deletion completed in 6.139573951s

• [SLOW TEST:10.569 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:55:39.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ntsjl
Jul  1 11:55:43.784: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ntsjl
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 11:55:43.788: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:59:44.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ntsjl" for this suite.
Jul  1 11:59:50.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 11:59:50.995: INFO: namespace: e2e-tests-container-probe-ntsjl, resource: bindings, ignored listing per whitelist
Jul  1 11:59:51.047: INFO: namespace e2e-tests-container-probe-ntsjl deletion completed in 6.076216864s

• [SLOW TEST:251.458 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
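Editor's note: a sketch of the liveness-exec pod this spec watches for roughly four minutes. The container creates /tmp/health and leaves it in place, so the exec probe keeps succeeding and restartCount stays at 0. The image, timings, and thresholds are illustrative assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: liveness
        image: busybox
        command:
        - sh
        - -c
        - "touch /tmp/health && sleep 600"   # the probed file exists for the pod's lifetime
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 1

Deleting /tmp/health inside the container would flip the probe to failing and the kubelet would restart the container; the companion spec (not shown here) covers that case.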
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 11:59:51.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul  1 11:59:55.750: INFO: Successfully updated pod "labelsupdatebb3a4c95-9bf7-11e9-9f49-0242ac110006"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 11:59:57.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jxvp8" for this suite.
Jul  1 12:00:19.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:00:19.881: INFO: namespace: e2e-tests-downward-api-jxvp8, resource: bindings, ignored listing per whitelist
Jul  1 12:00:20.001: INFO: namespace e2e-tests-downward-api-jxvp8 deletion completed in 22.213025451s

• [SLOW TEST:28.954 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
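Editor's note: this spec mounts the pod's own labels through a downwardAPI volume and then edits the labels, expecting the kubelet to refresh the projected file. A minimal sketch with illustrative names and label values; after relabelling the pod (for example with kubectl label --overwrite), /etc/podinfo/labels eventually reflects the change.

    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate-demo
      labels:
        purpose: demo
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels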
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:00:20.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  1 12:00:20.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5mtsv'
Jul  1 12:00:21.619: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  1 12:00:21.619: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul  1 12:00:23.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-5mtsv'
Jul  1 12:00:23.811: INFO: stderr: ""
Jul  1 12:00:23.811: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:00:23.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5mtsv" for this suite.
Jul  1 12:02:21.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:02:21.971: INFO: namespace: e2e-tests-kubectl-5mtsv, resource: bindings, ignored listing per whitelist
Jul  1 12:02:21.976: INFO: namespace e2e-tests-kubectl-5mtsv deletion completed in 1m58.122623859s

• [SLOW TEST:121.975 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
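
The deprecation warning above comes from `kubectl run` expanding the command into a Deployment via the old `--generator=deployment/apps.v1` path. The sketch below is roughly the equivalent object in Go API types; the `run` label and container name follow that generator's usual convention, but treat the details as assumptions rather than the exact object kubectl produced here.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// nginxDeployment approximates what the deprecated kubectl run generator
// creates for the invocation logged above.
func nginxDeployment() *appsv1.Deployment {
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", nginxDeployment()) }
```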
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:02:21.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:02:22.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-bzgnm" to be "success or failure"
Jul  1 12:02:22.143: INFO: Pod "downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.004347ms
Jul  1 12:02:24.146: INFO: Pod "downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012352026s
Jul  1 12:02:26.151: INFO: Pod "downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01715488s
STEP: Saw pod success
Jul  1 12:02:26.151: INFO: Pod "downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:02:26.153: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:02:26.184: INFO: Waiting for pod downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006 to disappear
Jul  1 12:02:26.189: INFO: Pod downwardapi-volume-152fba6a-9bf8-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:02:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bzgnm" for this suite.
Jul  1 12:02:32.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:02:32.245: INFO: namespace: e2e-tests-projected-bzgnm, resource: bindings, ignored listing per whitelist
Jul  1 12:02:32.304: INFO: namespace e2e-tests-projected-bzgnm deletion completed in 6.107260706s

• [SLOW TEST:10.328 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
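
A sketch of the kind of pod this test creates: a projected downward-API volume exposing the container's own memory request as a file that the container can read back. The names, image, and the 32Mi request below are placeholders, not the framework's actual values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryRequestPod mounts a projected downward-API volume that exposes the
// container's memory request as /etc/podinfo/memory_request.
func memoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", memoryRequestPod()) }
```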
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:02:32.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  1 12:02:32.407: INFO: Waiting up to 5m0s for pod "pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-9r9ns" to be "success or failure"
Jul  1 12:02:32.492: INFO: Pod "pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 84.603124ms
Jul  1 12:02:34.495: INFO: Pod "pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08801427s
Jul  1 12:02:36.505: INFO: Pod "pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097438498s
STEP: Saw pod success
Jul  1 12:02:36.505: INFO: Pod "pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:02:36.509: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:02:36.565: INFO: Waiting for pod pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006 to disappear
Jul  1 12:02:36.603: INFO: Pod pod-1b4fc08b-9bf8-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:02:36.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9r9ns" for this suite.
Jul  1 12:02:42.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:02:42.660: INFO: namespace: e2e-tests-emptydir-9r9ns, resource: bindings, ignored listing per whitelist
Jul  1 12:02:42.721: INFO: namespace e2e-tests-emptydir-9r9ns deletion completed in 6.111687764s

• [SLOW TEST:10.417 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
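
A rough equivalent of this emptyDir scenario, assuming a plain busybox image instead of the suite's mount-test image: a memory-backed (tmpfs) emptyDir mounted into a container running as a non-root UID, which creates a file and sets 0777 on it. The (root,0666,tmpfs) case later in this run differs only in the UID and the file mode.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// emptyDirTmpfsPod sketches the scenario above: verify the mount is tmpfs,
// create a file as a non-root user, and give it 0777 permissions.
func emptyDirTmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"grep ' /data ' /proc/mounts && touch /data/f && chmod 0777 /data/f && ls -l /data/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1001)},
				VolumeMounts:    []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", emptyDirTmpfsPod()) }
```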
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:02:42.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:03:09.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-k67nx" for this suite.
Jul  1 12:03:15.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:03:15.669: INFO: namespace: e2e-tests-container-runtime-k67nx, resource: bindings, ignored listing per whitelist
Jul  1 12:03:15.757: INFO: namespace e2e-tests-container-runtime-k67nx deletion completed in 6.171390925s

• [SLOW TEST:33.036 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
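
The three container names above exercise containers that exit under different restart policies; reading 'rpa'/'rpof'/'rpn' as Always/OnFailure/Never is an inference from the names, and the image and command below are placeholders. The sketch shows the shape of such a pod; after it runs, the checks in the log correspond to inspecting status.phase, status.containerStatuses[0].restartCount, and State/Ready.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exitingContainerPod builds a pod whose single container runs a short
// command and exits with the given code under the given restart policy.
func exitingContainerPod(policy corev1.RestartPolicy, exitCode int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}

func main() {
	// e.g. RestartPolicy=Never with a non-zero exit should end in phase Failed
	// with RestartCount 0; OnFailure restarts until exit 0; Always keeps restarting.
	fmt.Printf("%+v\n", exitingContainerPod(corev1.RestartPolicyNever, 1))
}
```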
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:03:15.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  1 12:03:15.913: INFO: PodSpec: initContainers in spec.initContainers
Jul  1 12:04:08.722: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-354095b1-9bf8-11e9-9f49-0242ac110006", GenerateName:"", Namespace:"e2e-tests-init-container-vtmdh", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-vtmdh/pods/pod-init-354095b1-9bf8-11e9-9f49-0242ac110006", UID:"35407709-9bf8-11e9-a678-fa163e0cec1d", ResourceVersion:"1851612", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63697579395, loc:(*time.Location)(0x7947a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"913060569"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-45fbq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001870300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-45fbq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-45fbq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-45fbq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d1e368), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-x6tdbol33slm", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027b60c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d1e3e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d1e400)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000d1e408), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000d1e40c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697579396, loc:(*time.Location)(0x7947a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697579396, loc:(*time.Location)(0x7947a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697579396, loc:(*time.Location)(0x7947a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697579395, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.100.12", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0015b6240), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00174a150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00174a1c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://7a043ffd927cc867a38c435d94ff9594854ccf1370287c665701a78cb1830f34"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0015b6280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0015b6260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:04:08.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vtmdh" for this suite.
Jul  1 12:04:30.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:04:30.877: INFO: namespace: e2e-tests-init-container-vtmdh, resource: bindings, ignored listing per whitelist
Jul  1 12:04:30.880: INFO: namespace e2e-tests-init-container-vtmdh deletion completed in 22.125599322s

• [SLOW TEST:75.123 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
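
The spec dumped above boils down to the following (reconstructed from the dump, with only the generated names replaced by placeholders): under RestartPolicy Always, init1 (/bin/false) keeps failing and restarting with backoff, so init2 and the app container run1 never start and the pod stays Pending with ContainersNotInitialized.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod mirrors the pod dumped above: a failing first init container
// blocks the second init container and the app container indefinitely.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", failingInitPod()) }
```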
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:04:30.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-5ddsn
I0701 12:04:31.024149       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-5ddsn, replica count: 1
I0701 12:04:32.074453       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:04:33.074638       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0701 12:04:34.074870       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  1 12:04:34.221: INFO: Created: latency-svc-wc2xk
Jul  1 12:04:34.256: INFO: Got endpoints: latency-svc-wc2xk [81.130157ms]
Jul  1 12:04:34.371: INFO: Created: latency-svc-48ncq
Jul  1 12:04:34.388: INFO: Got endpoints: latency-svc-48ncq [132.329204ms]
Jul  1 12:04:34.436: INFO: Created: latency-svc-2xrll
Jul  1 12:04:34.444: INFO: Got endpoints: latency-svc-2xrll [187.863435ms]
Jul  1 12:04:34.597: INFO: Created: latency-svc-vg928
Jul  1 12:04:34.604: INFO: Got endpoints: latency-svc-vg928 [347.814321ms]
Jul  1 12:04:34.635: INFO: Created: latency-svc-7p89p
Jul  1 12:04:34.641: INFO: Got endpoints: latency-svc-7p89p [384.39357ms]
Jul  1 12:04:34.792: INFO: Created: latency-svc-pqwnv
Jul  1 12:04:34.798: INFO: Got endpoints: latency-svc-pqwnv [541.723411ms]
Jul  1 12:04:34.842: INFO: Created: latency-svc-jgmt7
Jul  1 12:04:35.019: INFO: Got endpoints: latency-svc-jgmt7 [762.449827ms]
Jul  1 12:04:35.028: INFO: Created: latency-svc-ndkpc
Jul  1 12:04:35.040: INFO: Got endpoints: latency-svc-ndkpc [783.451085ms]
Jul  1 12:04:35.071: INFO: Created: latency-svc-76p67
Jul  1 12:04:35.076: INFO: Got endpoints: latency-svc-76p67 [818.773699ms]
Jul  1 12:04:35.225: INFO: Created: latency-svc-dvxqw
Jul  1 12:04:35.225: INFO: Got endpoints: latency-svc-dvxqw [968.361858ms]
Jul  1 12:04:35.271: INFO: Created: latency-svc-mwwnp
Jul  1 12:04:35.273: INFO: Got endpoints: latency-svc-mwwnp [1.016630673s]
Jul  1 12:04:35.315: INFO: Created: latency-svc-q7b84
Jul  1 12:04:35.318: INFO: Got endpoints: latency-svc-q7b84 [1.06093789s]
Jul  1 12:04:35.443: INFO: Created: latency-svc-h62bb
Jul  1 12:04:35.443: INFO: Got endpoints: latency-svc-h62bb [1.186233193s]
Jul  1 12:04:35.492: INFO: Created: latency-svc-mbps5
Jul  1 12:04:35.504: INFO: Got endpoints: latency-svc-mbps5 [1.247096108s]
Jul  1 12:04:35.536: INFO: Created: latency-svc-5gqk9
Jul  1 12:04:35.642: INFO: Got endpoints: latency-svc-5gqk9 [1.385821034s]
Jul  1 12:04:35.655: INFO: Created: latency-svc-m286s
Jul  1 12:04:35.660: INFO: Got endpoints: latency-svc-m286s [1.403020163s]
Jul  1 12:04:35.715: INFO: Created: latency-svc-dhwkz
Jul  1 12:04:35.718: INFO: Got endpoints: latency-svc-dhwkz [1.329759867s]
Jul  1 12:04:35.865: INFO: Created: latency-svc-62wms
Jul  1 12:04:35.870: INFO: Got endpoints: latency-svc-62wms [1.426478788s]
Jul  1 12:04:35.930: INFO: Created: latency-svc-wmw8h
Jul  1 12:04:35.950: INFO: Got endpoints: latency-svc-wmw8h [1.345853109s]
Jul  1 12:04:36.093: INFO: Created: latency-svc-h28n7
Jul  1 12:04:36.103: INFO: Got endpoints: latency-svc-h28n7 [1.461187635s]
Jul  1 12:04:36.180: INFO: Created: latency-svc-6hdj8
Jul  1 12:04:36.185: INFO: Got endpoints: latency-svc-6hdj8 [1.386773841s]
Jul  1 12:04:36.393: INFO: Created: latency-svc-n5tlc
Jul  1 12:04:36.394: INFO: Got endpoints: latency-svc-n5tlc [1.374484093s]
Jul  1 12:04:36.456: INFO: Created: latency-svc-sx2wz
Jul  1 12:04:36.458: INFO: Got endpoints: latency-svc-sx2wz [1.418163657s]
Jul  1 12:04:36.551: INFO: Created: latency-svc-rq6sb
Jul  1 12:04:36.557: INFO: Got endpoints: latency-svc-rq6sb [1.481562215s]
Jul  1 12:04:36.601: INFO: Created: latency-svc-l6bsf
Jul  1 12:04:36.603: INFO: Got endpoints: latency-svc-l6bsf [1.377500282s]
Jul  1 12:04:36.752: INFO: Created: latency-svc-z529g
Jul  1 12:04:36.755: INFO: Got endpoints: latency-svc-z529g [1.481781319s]
Jul  1 12:04:36.768: INFO: Created: latency-svc-q2d27
Jul  1 12:04:36.784: INFO: Got endpoints: latency-svc-q2d27 [1.46598125s]
Jul  1 12:04:36.822: INFO: Created: latency-svc-z978g
Jul  1 12:04:36.827: INFO: Got endpoints: latency-svc-z978g [1.383291266s]
Jul  1 12:04:36.943: INFO: Created: latency-svc-fbhxm
Jul  1 12:04:36.949: INFO: Got endpoints: latency-svc-fbhxm [1.445103771s]
Jul  1 12:04:37.005: INFO: Created: latency-svc-t4cx6
Jul  1 12:04:37.009: INFO: Got endpoints: latency-svc-t4cx6 [1.366660188s]
Jul  1 12:04:37.185: INFO: Created: latency-svc-9m4lf
Jul  1 12:04:37.189: INFO: Got endpoints: latency-svc-9m4lf [1.529550039s]
Jul  1 12:04:37.242: INFO: Created: latency-svc-4vwls
Jul  1 12:04:37.260: INFO: Got endpoints: latency-svc-4vwls [1.542008136s]
Jul  1 12:04:37.343: INFO: Created: latency-svc-jwqsj
Jul  1 12:04:37.346: INFO: Got endpoints: latency-svc-jwqsj [1.476011006s]
Jul  1 12:04:37.390: INFO: Created: latency-svc-ng9ht
Jul  1 12:04:37.397: INFO: Got endpoints: latency-svc-ng9ht [1.447258782s]
Jul  1 12:04:37.581: INFO: Created: latency-svc-fmm9s
Jul  1 12:04:37.585: INFO: Got endpoints: latency-svc-fmm9s [1.481846737s]
Jul  1 12:04:37.650: INFO: Created: latency-svc-gg7px
Jul  1 12:04:37.797: INFO: Got endpoints: latency-svc-gg7px [1.6121937s]
Jul  1 12:04:37.822: INFO: Created: latency-svc-gpcqt
Jul  1 12:04:37.856: INFO: Got endpoints: latency-svc-gpcqt [1.462117074s]
Jul  1 12:04:37.865: INFO: Created: latency-svc-2gpt8
Jul  1 12:04:37.871: INFO: Got endpoints: latency-svc-2gpt8 [1.412406988s]
Jul  1 12:04:37.983: INFO: Created: latency-svc-xdlk8
Jul  1 12:04:37.985: INFO: Got endpoints: latency-svc-xdlk8 [1.42805811s]
Jul  1 12:04:38.013: INFO: Created: latency-svc-qgzh7
Jul  1 12:04:38.020: INFO: Got endpoints: latency-svc-qgzh7 [1.416788908s]
Jul  1 12:04:38.051: INFO: Created: latency-svc-pv9dg
Jul  1 12:04:38.052: INFO: Got endpoints: latency-svc-pv9dg [1.296547437s]
Jul  1 12:04:38.229: INFO: Created: latency-svc-95rvx
Jul  1 12:04:38.238: INFO: Got endpoints: latency-svc-95rvx [1.453661038s]
Jul  1 12:04:38.271: INFO: Created: latency-svc-xrgsz
Jul  1 12:04:38.278: INFO: Got endpoints: latency-svc-xrgsz [1.451135756s]
Jul  1 12:04:38.323: INFO: Created: latency-svc-2lczc
Jul  1 12:04:38.323: INFO: Got endpoints: latency-svc-2lczc [1.37349259s]
Jul  1 12:04:38.460: INFO: Created: latency-svc-ttvb2
Jul  1 12:04:38.465: INFO: Got endpoints: latency-svc-ttvb2 [1.455558711s]
Jul  1 12:04:38.512: INFO: Created: latency-svc-86zm4
Jul  1 12:04:38.516: INFO: Got endpoints: latency-svc-86zm4 [1.326345471s]
Jul  1 12:04:38.545: INFO: Created: latency-svc-84tts
Jul  1 12:04:38.632: INFO: Got endpoints: latency-svc-84tts [1.371272931s]
Jul  1 12:04:38.635: INFO: Created: latency-svc-f722x
Jul  1 12:04:38.641: INFO: Got endpoints: latency-svc-f722x [1.295103073s]
Jul  1 12:04:38.686: INFO: Created: latency-svc-bnhwg
Jul  1 12:04:38.686: INFO: Got endpoints: latency-svc-bnhwg [1.288259693s]
Jul  1 12:04:38.736: INFO: Created: latency-svc-lrzqn
Jul  1 12:04:38.906: INFO: Got endpoints: latency-svc-lrzqn [1.321464921s]
Jul  1 12:04:38.914: INFO: Created: latency-svc-lrrnz
Jul  1 12:04:38.930: INFO: Got endpoints: latency-svc-lrrnz [1.132590479s]
Jul  1 12:04:38.977: INFO: Created: latency-svc-zrjb4
Jul  1 12:04:38.990: INFO: Got endpoints: latency-svc-zrjb4 [1.134348689s]
Jul  1 12:04:39.233: INFO: Created: latency-svc-nh9vw
Jul  1 12:04:39.240: INFO: Got endpoints: latency-svc-nh9vw [1.368588869s]
Jul  1 12:04:39.279: INFO: Created: latency-svc-kcn5m
Jul  1 12:04:39.284: INFO: Got endpoints: latency-svc-kcn5m [1.299023544s]
Jul  1 12:04:39.398: INFO: Created: latency-svc-bpbt4
Jul  1 12:04:39.403: INFO: Got endpoints: latency-svc-bpbt4 [1.382878157s]
Jul  1 12:04:39.445: INFO: Created: latency-svc-9rvfh
Jul  1 12:04:39.455: INFO: Got endpoints: latency-svc-9rvfh [1.403106708s]
Jul  1 12:04:39.490: INFO: Created: latency-svc-7b9xc
Jul  1 12:04:39.587: INFO: Got endpoints: latency-svc-7b9xc [1.349535939s]
Jul  1 12:04:39.594: INFO: Created: latency-svc-8zstm
Jul  1 12:04:39.599: INFO: Got endpoints: latency-svc-8zstm [1.321348154s]
Jul  1 12:04:39.644: INFO: Created: latency-svc-bzl4t
Jul  1 12:04:39.646: INFO: Got endpoints: latency-svc-bzl4t [1.323466928s]
Jul  1 12:04:39.673: INFO: Created: latency-svc-42lr5
Jul  1 12:04:39.675: INFO: Got endpoints: latency-svc-42lr5 [1.210336516s]
Jul  1 12:04:39.811: INFO: Created: latency-svc-p4wdj
Jul  1 12:04:39.813: INFO: Got endpoints: latency-svc-p4wdj [1.29774355s]
Jul  1 12:04:39.849: INFO: Created: latency-svc-t46hc
Jul  1 12:04:39.854: INFO: Got endpoints: latency-svc-t46hc [1.222542043s]
Jul  1 12:04:39.888: INFO: Created: latency-svc-22vts
Jul  1 12:04:39.989: INFO: Got endpoints: latency-svc-22vts [1.34724745s]
Jul  1 12:04:40.025: INFO: Created: latency-svc-5fck8
Jul  1 12:04:40.025: INFO: Got endpoints: latency-svc-5fck8 [1.339655496s]
Jul  1 12:04:40.065: INFO: Created: latency-svc-4vl67
Jul  1 12:04:40.065: INFO: Got endpoints: latency-svc-4vl67 [1.159047792s]
Jul  1 12:04:40.094: INFO: Created: latency-svc-gtzcl
Jul  1 12:04:40.242: INFO: Got endpoints: latency-svc-gtzcl [1.311573117s]
Jul  1 12:04:40.251: INFO: Created: latency-svc-xwmsb
Jul  1 12:04:40.270: INFO: Got endpoints: latency-svc-xwmsb [1.279053874s]
Jul  1 12:04:40.321: INFO: Created: latency-svc-clkp9
Jul  1 12:04:40.437: INFO: Got endpoints: latency-svc-clkp9 [1.197665923s]
Jul  1 12:04:40.449: INFO: Created: latency-svc-6x84p
Jul  1 12:04:40.454: INFO: Got endpoints: latency-svc-6x84p [1.169376738s]
Jul  1 12:04:40.517: INFO: Created: latency-svc-wpksx
Jul  1 12:04:40.525: INFO: Got endpoints: latency-svc-wpksx [1.121879116s]
Jul  1 12:04:40.619: INFO: Created: latency-svc-8wh8q
Jul  1 12:04:40.625: INFO: Got endpoints: latency-svc-8wh8q [1.170109665s]
Jul  1 12:04:40.667: INFO: Created: latency-svc-7qf6k
Jul  1 12:04:40.678: INFO: Got endpoints: latency-svc-7qf6k [1.090810597s]
Jul  1 12:04:40.737: INFO: Created: latency-svc-h6dwf
Jul  1 12:04:40.857: INFO: Got endpoints: latency-svc-h6dwf [1.257512766s]
Jul  1 12:04:40.870: INFO: Created: latency-svc-8q77m
Jul  1 12:04:40.874: INFO: Got endpoints: latency-svc-8q77m [1.22752939s]
Jul  1 12:04:40.918: INFO: Created: latency-svc-44d7g
Jul  1 12:04:40.921: INFO: Got endpoints: latency-svc-44d7g [1.246261474s]
Jul  1 12:04:41.048: INFO: Created: latency-svc-lrbm5
Jul  1 12:04:41.048: INFO: Got endpoints: latency-svc-lrbm5 [1.235016055s]
Jul  1 12:04:41.083: INFO: Created: latency-svc-gsksc
Jul  1 12:04:41.090: INFO: Got endpoints: latency-svc-gsksc [1.235978331s]
Jul  1 12:04:41.138: INFO: Created: latency-svc-vqx6r
Jul  1 12:04:41.260: INFO: Got endpoints: latency-svc-vqx6r [1.270918154s]
Jul  1 12:04:41.271: INFO: Created: latency-svc-hdtqp
Jul  1 12:04:41.286: INFO: Got endpoints: latency-svc-hdtqp [1.260726549s]
Jul  1 12:04:41.363: INFO: Created: latency-svc-6r26d
Jul  1 12:04:41.455: INFO: Got endpoints: latency-svc-6r26d [1.389937796s]
Jul  1 12:04:41.460: INFO: Created: latency-svc-82ndw
Jul  1 12:04:41.462: INFO: Got endpoints: latency-svc-82ndw [1.220188979s]
Jul  1 12:04:41.525: INFO: Created: latency-svc-5bssk
Jul  1 12:04:41.528: INFO: Got endpoints: latency-svc-5bssk [1.258777264s]
Jul  1 12:04:41.715: INFO: Created: latency-svc-zqbsk
Jul  1 12:04:41.726: INFO: Got endpoints: latency-svc-zqbsk [1.288616257s]
Jul  1 12:04:41.770: INFO: Created: latency-svc-ln9vf
Jul  1 12:04:41.777: INFO: Got endpoints: latency-svc-ln9vf [1.322805796s]
Jul  1 12:04:41.941: INFO: Created: latency-svc-lfmb9
Jul  1 12:04:41.941: INFO: Got endpoints: latency-svc-lfmb9 [1.416029136s]
Jul  1 12:04:41.987: INFO: Created: latency-svc-b772k
Jul  1 12:04:41.991: INFO: Got endpoints: latency-svc-b772k [1.36523578s]
Jul  1 12:04:42.102: INFO: Created: latency-svc-sgh6m
Jul  1 12:04:42.105: INFO: Got endpoints: latency-svc-sgh6m [1.426856472s]
Jul  1 12:04:42.166: INFO: Created: latency-svc-mbfxl
Jul  1 12:04:42.172: INFO: Got endpoints: latency-svc-mbfxl [1.315569357s]
Jul  1 12:04:42.201: INFO: Created: latency-svc-n57mf
Jul  1 12:04:42.313: INFO: Created: latency-svc-g546d
Jul  1 12:04:42.313: INFO: Got endpoints: latency-svc-g546d [1.391325595s]
Jul  1 12:04:42.315: INFO: Got endpoints: latency-svc-n57mf [1.441481342s]
Jul  1 12:04:42.348: INFO: Created: latency-svc-bvlwc
Jul  1 12:04:42.359: INFO: Got endpoints: latency-svc-bvlwc [1.310484117s]
Jul  1 12:04:42.377: INFO: Created: latency-svc-s56fm
Jul  1 12:04:42.382: INFO: Got endpoints: latency-svc-s56fm [1.291531233s]
Jul  1 12:04:42.489: INFO: Created: latency-svc-2lxbl
Jul  1 12:04:42.494: INFO: Got endpoints: latency-svc-2lxbl [1.234199437s]
Jul  1 12:04:42.536: INFO: Created: latency-svc-mtkcf
Jul  1 12:04:42.542: INFO: Got endpoints: latency-svc-mtkcf [1.255988692s]
Jul  1 12:04:42.569: INFO: Created: latency-svc-zjvn2
Jul  1 12:04:42.577: INFO: Got endpoints: latency-svc-zjvn2 [1.121893691s]
Jul  1 12:04:42.670: INFO: Created: latency-svc-krwcv
Jul  1 12:04:42.673: INFO: Got endpoints: latency-svc-krwcv [1.211336723s]
Jul  1 12:04:42.712: INFO: Created: latency-svc-vdvk5
Jul  1 12:04:42.726: INFO: Got endpoints: latency-svc-vdvk5 [1.197415215s]
Jul  1 12:04:42.764: INFO: Created: latency-svc-2xxd2
Jul  1 12:04:42.911: INFO: Got endpoints: latency-svc-2xxd2 [1.184539011s]
Jul  1 12:04:42.919: INFO: Created: latency-svc-vgsbk
Jul  1 12:04:42.930: INFO: Got endpoints: latency-svc-vgsbk [1.153600172s]
Jul  1 12:04:42.976: INFO: Created: latency-svc-986gt
Jul  1 12:04:43.015: INFO: Got endpoints: latency-svc-986gt [1.073962342s]
Jul  1 12:04:43.144: INFO: Created: latency-svc-v5bsw
Jul  1 12:04:43.171: INFO: Got endpoints: latency-svc-v5bsw [1.180869627s]
Jul  1 12:04:43.175: INFO: Created: latency-svc-q4vwf
Jul  1 12:04:43.177: INFO: Got endpoints: latency-svc-q4vwf [1.072484828s]
Jul  1 12:04:43.305: INFO: Created: latency-svc-ds55q
Jul  1 12:04:43.328: INFO: Got endpoints: latency-svc-ds55q [1.155635185s]
Jul  1 12:04:43.400: INFO: Created: latency-svc-k64pd
Jul  1 12:04:43.400: INFO: Got endpoints: latency-svc-k64pd [1.087108903s]
Jul  1 12:04:43.489: INFO: Created: latency-svc-6lh52
Jul  1 12:04:43.497: INFO: Got endpoints: latency-svc-6lh52 [1.181993579s]
Jul  1 12:04:43.561: INFO: Created: latency-svc-zhvf8
Jul  1 12:04:43.573: INFO: Got endpoints: latency-svc-zhvf8 [1.213912128s]
Jul  1 12:04:43.665: INFO: Created: latency-svc-lvq46
Jul  1 12:04:43.673: INFO: Got endpoints: latency-svc-lvq46 [1.290892147s]
Jul  1 12:04:43.704: INFO: Created: latency-svc-lkkmh
Jul  1 12:04:43.724: INFO: Got endpoints: latency-svc-lkkmh [1.229804006s]
Jul  1 12:04:43.821: INFO: Created: latency-svc-89wlc
Jul  1 12:04:43.824: INFO: Got endpoints: latency-svc-89wlc [1.282157892s]
Jul  1 12:04:43.863: INFO: Created: latency-svc-pt8gn
Jul  1 12:04:43.870: INFO: Got endpoints: latency-svc-pt8gn [1.292671581s]
Jul  1 12:04:43.898: INFO: Created: latency-svc-c28cp
Jul  1 12:04:43.905: INFO: Got endpoints: latency-svc-c28cp [1.231394794s]
Jul  1 12:04:44.008: INFO: Created: latency-svc-7mm7x
Jul  1 12:04:44.015: INFO: Got endpoints: latency-svc-7mm7x [1.288648634s]
Jul  1 12:04:44.068: INFO: Created: latency-svc-f2cd5
Jul  1 12:04:44.079: INFO: Got endpoints: latency-svc-f2cd5 [1.168775041s]
Jul  1 12:04:44.184: INFO: Created: latency-svc-c5mkm
Jul  1 12:04:44.184: INFO: Got endpoints: latency-svc-c5mkm [1.254052174s]
Jul  1 12:04:44.251: INFO: Created: latency-svc-v8mg2
Jul  1 12:04:44.258: INFO: Got endpoints: latency-svc-v8mg2 [1.24346054s]
Jul  1 12:04:44.382: INFO: Created: latency-svc-wvhcr
Jul  1 12:04:44.383: INFO: Got endpoints: latency-svc-wvhcr [1.211043675s]
Jul  1 12:04:44.440: INFO: Created: latency-svc-dvdpj
Jul  1 12:04:44.440: INFO: Got endpoints: latency-svc-dvdpj [1.262856683s]
Jul  1 12:04:44.550: INFO: Created: latency-svc-f55bd
Jul  1 12:04:44.557: INFO: Got endpoints: latency-svc-f55bd [1.228795964s]
Jul  1 12:04:44.607: INFO: Created: latency-svc-mmkwz
Jul  1 12:04:44.628: INFO: Got endpoints: latency-svc-mmkwz [1.227756627s]
Jul  1 12:04:44.710: INFO: Created: latency-svc-b74f8
Jul  1 12:04:44.726: INFO: Got endpoints: latency-svc-b74f8 [1.228953792s]
Jul  1 12:04:44.768: INFO: Created: latency-svc-27fj5
Jul  1 12:04:44.768: INFO: Got endpoints: latency-svc-27fj5 [1.195462238s]
Jul  1 12:04:44.810: INFO: Created: latency-svc-h4ttl
Jul  1 12:04:44.924: INFO: Got endpoints: latency-svc-h4ttl [1.250791278s]
Jul  1 12:04:44.963: INFO: Created: latency-svc-xkh6z
Jul  1 12:04:44.964: INFO: Got endpoints: latency-svc-xkh6z [1.240411302s]
Jul  1 12:04:45.004: INFO: Created: latency-svc-2fbbr
Jul  1 12:04:45.004: INFO: Got endpoints: latency-svc-2fbbr [1.17952036s]
Jul  1 12:04:45.144: INFO: Created: latency-svc-dbmcr
Jul  1 12:04:45.155: INFO: Got endpoints: latency-svc-dbmcr [1.28490394s]
Jul  1 12:04:45.229: INFO: Created: latency-svc-8wn7r
Jul  1 12:04:45.229: INFO: Got endpoints: latency-svc-8wn7r [1.324083762s]
Jul  1 12:04:45.325: INFO: Created: latency-svc-4qtd8
Jul  1 12:04:45.328: INFO: Got endpoints: latency-svc-4qtd8 [1.313542943s]
Jul  1 12:04:45.393: INFO: Created: latency-svc-cb7tf
Jul  1 12:04:45.404: INFO: Got endpoints: latency-svc-cb7tf [1.324163219s]
Jul  1 12:04:45.507: INFO: Created: latency-svc-dnnp9
Jul  1 12:04:45.516: INFO: Got endpoints: latency-svc-dnnp9 [1.332011755s]
Jul  1 12:04:45.562: INFO: Created: latency-svc-rzm5n
Jul  1 12:04:45.567: INFO: Got endpoints: latency-svc-rzm5n [1.309164589s]
Jul  1 12:04:45.696: INFO: Created: latency-svc-ppx9m
Jul  1 12:04:45.702: INFO: Got endpoints: latency-svc-ppx9m [1.319516006s]
Jul  1 12:04:45.766: INFO: Created: latency-svc-dgmmw
Jul  1 12:04:45.774: INFO: Got endpoints: latency-svc-dgmmw [1.333530961s]
Jul  1 12:04:45.861: INFO: Created: latency-svc-prn9h
Jul  1 12:04:45.868: INFO: Got endpoints: latency-svc-prn9h [1.311129817s]
Jul  1 12:04:45.904: INFO: Created: latency-svc-kl7vk
Jul  1 12:04:45.919: INFO: Got endpoints: latency-svc-kl7vk [1.2907574s]
Jul  1 12:04:45.952: INFO: Created: latency-svc-2g7xl
Jul  1 12:04:45.954: INFO: Got endpoints: latency-svc-2g7xl [1.227999494s]
Jul  1 12:04:46.065: INFO: Created: latency-svc-zqtsk
Jul  1 12:04:46.074: INFO: Got endpoints: latency-svc-zqtsk [1.305627958s]
Jul  1 12:04:46.147: INFO: Created: latency-svc-cl6n6
Jul  1 12:04:46.152: INFO: Got endpoints: latency-svc-cl6n6 [1.227884592s]
Jul  1 12:04:46.303: INFO: Created: latency-svc-n7kcb
Jul  1 12:04:46.312: INFO: Got endpoints: latency-svc-n7kcb [1.347217908s]
Jul  1 12:04:46.357: INFO: Created: latency-svc-xcgkj
Jul  1 12:04:46.364: INFO: Got endpoints: latency-svc-xcgkj [1.359769871s]
Jul  1 12:04:46.483: INFO: Created: latency-svc-6zs87
Jul  1 12:04:46.499: INFO: Got endpoints: latency-svc-6zs87 [1.344680579s]
Jul  1 12:04:46.526: INFO: Created: latency-svc-7mwlv
Jul  1 12:04:46.532: INFO: Got endpoints: latency-svc-7mwlv [1.303074863s]
Jul  1 12:04:46.584: INFO: Created: latency-svc-vmgx6
Jul  1 12:04:46.688: INFO: Got endpoints: latency-svc-vmgx6 [1.359718862s]
Jul  1 12:04:46.732: INFO: Created: latency-svc-pj86f
Jul  1 12:04:46.732: INFO: Got endpoints: latency-svc-pj86f [1.328650755s]
Jul  1 12:04:46.762: INFO: Created: latency-svc-znsnc
Jul  1 12:04:46.887: INFO: Got endpoints: latency-svc-znsnc [1.370504485s]
Jul  1 12:04:46.900: INFO: Created: latency-svc-r9r82
Jul  1 12:04:46.903: INFO: Got endpoints: latency-svc-r9r82 [1.335677032s]
Jul  1 12:04:46.939: INFO: Created: latency-svc-g2vbn
Jul  1 12:04:46.953: INFO: Got endpoints: latency-svc-g2vbn [1.250637919s]
Jul  1 12:04:46.975: INFO: Created: latency-svc-gpnxh
Jul  1 12:04:46.983: INFO: Got endpoints: latency-svc-gpnxh [1.209085563s]
Jul  1 12:04:47.082: INFO: Created: latency-svc-wfwnx
Jul  1 12:04:47.095: INFO: Got endpoints: latency-svc-wfwnx [1.226523265s]
Jul  1 12:04:47.143: INFO: Created: latency-svc-dpvn7
Jul  1 12:04:47.161: INFO: Got endpoints: latency-svc-dpvn7 [1.242689974s]
Jul  1 12:04:47.285: INFO: Created: latency-svc-s6vgs
Jul  1 12:04:47.287: INFO: Got endpoints: latency-svc-s6vgs [1.33276752s]
Jul  1 12:04:47.316: INFO: Created: latency-svc-vwz4k
Jul  1 12:04:47.324: INFO: Got endpoints: latency-svc-vwz4k [1.249447158s]
Jul  1 12:04:47.354: INFO: Created: latency-svc-vf4dm
Jul  1 12:04:47.361: INFO: Got endpoints: latency-svc-vf4dm [1.209208672s]
Jul  1 12:04:47.494: INFO: Created: latency-svc-gjqzc
Jul  1 12:04:47.500: INFO: Got endpoints: latency-svc-gjqzc [1.188718326s]
Jul  1 12:04:47.544: INFO: Created: latency-svc-nd2nf
Jul  1 12:04:47.561: INFO: Got endpoints: latency-svc-nd2nf [1.196736663s]
Jul  1 12:04:47.585: INFO: Created: latency-svc-zwb88
Jul  1 12:04:47.589: INFO: Got endpoints: latency-svc-zwb88 [1.089831346s]
Jul  1 12:04:47.687: INFO: Created: latency-svc-7jpj4
Jul  1 12:04:47.691: INFO: Got endpoints: latency-svc-7jpj4 [1.159316814s]
Jul  1 12:04:47.718: INFO: Created: latency-svc-gff6r
Jul  1 12:04:47.726: INFO: Got endpoints: latency-svc-gff6r [1.038457191s]
Jul  1 12:04:47.772: INFO: Created: latency-svc-4t6sv
Jul  1 12:04:47.775: INFO: Got endpoints: latency-svc-4t6sv [1.042802943s]
Jul  1 12:04:47.873: INFO: Created: latency-svc-n4x6x
Jul  1 12:04:47.876: INFO: Got endpoints: latency-svc-n4x6x [988.567924ms]
Jul  1 12:04:47.929: INFO: Created: latency-svc-7x8p7
Jul  1 12:04:47.933: INFO: Got endpoints: latency-svc-7x8p7 [1.029557696s]
Jul  1 12:04:48.025: INFO: Created: latency-svc-h9j4c
Jul  1 12:04:48.032: INFO: Got endpoints: latency-svc-h9j4c [1.079210051s]
Jul  1 12:04:48.076: INFO: Created: latency-svc-wm5w6
Jul  1 12:04:48.083: INFO: Got endpoints: latency-svc-wm5w6 [1.100109639s]
Jul  1 12:04:48.235: INFO: Created: latency-svc-s4wqk
Jul  1 12:04:48.247: INFO: Got endpoints: latency-svc-s4wqk [1.152334365s]
Jul  1 12:04:48.284: INFO: Created: latency-svc-xc52z
Jul  1 12:04:48.296: INFO: Got endpoints: latency-svc-xc52z [1.134849806s]
Jul  1 12:04:48.478: INFO: Created: latency-svc-jxdcl
Jul  1 12:04:48.513: INFO: Got endpoints: latency-svc-jxdcl [1.22595579s]
Jul  1 12:04:48.558: INFO: Created: latency-svc-knp5q
Jul  1 12:04:48.560: INFO: Got endpoints: latency-svc-knp5q [1.236691567s]
Jul  1 12:04:48.693: INFO: Created: latency-svc-gfk4j
Jul  1 12:04:48.699: INFO: Got endpoints: latency-svc-gfk4j [1.338335445s]
Jul  1 12:04:48.780: INFO: Created: latency-svc-pdrw5
Jul  1 12:04:48.785: INFO: Got endpoints: latency-svc-pdrw5 [1.284534996s]
Jul  1 12:04:48.978: INFO: Created: latency-svc-mcjq2
Jul  1 12:04:48.985: INFO: Got endpoints: latency-svc-mcjq2 [1.424736065s]
Jul  1 12:04:49.071: INFO: Created: latency-svc-p9gfg
Jul  1 12:04:49.247: INFO: Got endpoints: latency-svc-p9gfg [1.657180295s]
Jul  1 12:04:49.252: INFO: Created: latency-svc-xjcgq
Jul  1 12:04:49.257: INFO: Got endpoints: latency-svc-xjcgq [1.565211015s]
Jul  1 12:04:49.342: INFO: Created: latency-svc-m89cd
Jul  1 12:04:49.454: INFO: Got endpoints: latency-svc-m89cd [1.727712022s]
Jul  1 12:04:49.514: INFO: Created: latency-svc-kh247
Jul  1 12:04:49.514: INFO: Got endpoints: latency-svc-kh247 [1.739153407s]
Jul  1 12:04:49.659: INFO: Created: latency-svc-g87f4
Jul  1 12:04:49.724: INFO: Got endpoints: latency-svc-g87f4 [1.848079924s]
Jul  1 12:04:49.726: INFO: Created: latency-svc-h9dqf
Jul  1 12:04:49.808: INFO: Got endpoints: latency-svc-h9dqf [1.875018411s]
Jul  1 12:04:49.828: INFO: Created: latency-svc-mjk8q
Jul  1 12:04:49.830: INFO: Got endpoints: latency-svc-mjk8q [1.797549815s]
Jul  1 12:04:49.857: INFO: Created: latency-svc-f7hdp
Jul  1 12:04:49.859: INFO: Got endpoints: latency-svc-f7hdp [1.775919275s]
Jul  1 12:04:49.891: INFO: Created: latency-svc-gnqgs
Jul  1 12:04:50.012: INFO: Got endpoints: latency-svc-gnqgs [1.764730136s]
Jul  1 12:04:50.018: INFO: Created: latency-svc-klj97
Jul  1 12:04:50.026: INFO: Got endpoints: latency-svc-klj97 [1.729770095s]
Jul  1 12:04:50.067: INFO: Created: latency-svc-84tmx
Jul  1 12:04:50.071: INFO: Got endpoints: latency-svc-84tmx [1.557856748s]
Jul  1 12:04:50.101: INFO: Created: latency-svc-8gmft
Jul  1 12:04:50.233: INFO: Got endpoints: latency-svc-8gmft [1.672749499s]
Jul  1 12:04:50.241: INFO: Created: latency-svc-tjn4v
Jul  1 12:04:50.247: INFO: Got endpoints: latency-svc-tjn4v [1.547098761s]
Jul  1 12:04:50.284: INFO: Created: latency-svc-bx682
Jul  1 12:04:50.293: INFO: Got endpoints: latency-svc-bx682 [1.507809925s]
Jul  1 12:04:50.327: INFO: Created: latency-svc-bv7ht
Jul  1 12:04:50.418: INFO: Got endpoints: latency-svc-bv7ht [1.432924526s]
Jul  1 12:04:50.424: INFO: Created: latency-svc-8mpdg
Jul  1 12:04:50.428: INFO: Got endpoints: latency-svc-8mpdg [1.180879129s]
Jul  1 12:04:50.468: INFO: Created: latency-svc-cxm9p
Jul  1 12:04:50.469: INFO: Got endpoints: latency-svc-cxm9p [1.212202814s]
Jul  1 12:04:50.513: INFO: Created: latency-svc-6st9s
Jul  1 12:04:50.516: INFO: Got endpoints: latency-svc-6st9s [1.061643079s]
Jul  1 12:04:50.605: INFO: Created: latency-svc-d277d
Jul  1 12:04:50.612: INFO: Got endpoints: latency-svc-d277d [1.097722727s]
Jul  1 12:04:50.641: INFO: Created: latency-svc-pfgss
Jul  1 12:04:50.643: INFO: Got endpoints: latency-svc-pfgss [919.586424ms]
Jul  1 12:04:50.684: INFO: Created: latency-svc-gfmdv
Jul  1 12:04:50.685: INFO: Got endpoints: latency-svc-gfmdv [876.845819ms]
Jul  1 12:04:50.780: INFO: Created: latency-svc-q8l25
Jul  1 12:04:50.782: INFO: Got endpoints: latency-svc-q8l25 [952.189069ms]
Jul  1 12:04:50.818: INFO: Created: latency-svc-757zp
Jul  1 12:04:50.827: INFO: Got endpoints: latency-svc-757zp [967.9596ms]
Jul  1 12:04:50.867: INFO: Created: latency-svc-27ptl
Jul  1 12:04:50.871: INFO: Got endpoints: latency-svc-27ptl [859.416718ms]
Jul  1 12:04:50.995: INFO: Created: latency-svc-xc54s
Jul  1 12:04:51.001: INFO: Got endpoints: latency-svc-xc54s [974.576517ms]
Jul  1 12:04:51.033: INFO: Created: latency-svc-lwsg8
Jul  1 12:04:51.041: INFO: Got endpoints: latency-svc-lwsg8 [969.538471ms]
Jul  1 12:04:51.206: INFO: Created: latency-svc-cdlv9
Jul  1 12:04:51.213: INFO: Got endpoints: latency-svc-cdlv9 [979.357057ms]
Jul  1 12:04:51.245: INFO: Created: latency-svc-kz9gl
Jul  1 12:04:51.251: INFO: Got endpoints: latency-svc-kz9gl [1.004413717s]
Jul  1 12:04:51.282: INFO: Created: latency-svc-sf9jg
Jul  1 12:04:51.286: INFO: Got endpoints: latency-svc-sf9jg [993.399634ms]
Jul  1 12:04:51.409: INFO: Created: latency-svc-47s8c
Jul  1 12:04:51.411: INFO: Got endpoints: latency-svc-47s8c [992.913913ms]
Jul  1 12:04:51.445: INFO: Created: latency-svc-krdpb
Jul  1 12:04:51.448: INFO: Got endpoints: latency-svc-krdpb [1.020266991s]
Jul  1 12:04:51.483: INFO: Created: latency-svc-fmvm9
Jul  1 12:04:51.492: INFO: Got endpoints: latency-svc-fmvm9 [1.022893731s]
Jul  1 12:04:51.492: INFO: Latencies: [132.329204ms 187.863435ms 347.814321ms 384.39357ms 541.723411ms 762.449827ms 783.451085ms 818.773699ms 859.416718ms 876.845819ms 919.586424ms 952.189069ms 967.9596ms 968.361858ms 969.538471ms 974.576517ms 979.357057ms 988.567924ms 992.913913ms 993.399634ms 1.004413717s 1.016630673s 1.020266991s 1.022893731s 1.029557696s 1.038457191s 1.042802943s 1.06093789s 1.061643079s 1.072484828s 1.073962342s 1.079210051s 1.087108903s 1.089831346s 1.090810597s 1.097722727s 1.100109639s 1.121879116s 1.121893691s 1.132590479s 1.134348689s 1.134849806s 1.152334365s 1.153600172s 1.155635185s 1.159047792s 1.159316814s 1.168775041s 1.169376738s 1.170109665s 1.17952036s 1.180869627s 1.180879129s 1.181993579s 1.184539011s 1.186233193s 1.188718326s 1.195462238s 1.196736663s 1.197415215s 1.197665923s 1.209085563s 1.209208672s 1.210336516s 1.211043675s 1.211336723s 1.212202814s 1.213912128s 1.220188979s 1.222542043s 1.22595579s 1.226523265s 1.22752939s 1.227756627s 1.227884592s 1.227999494s 1.228795964s 1.228953792s 1.229804006s 1.231394794s 1.234199437s 1.235016055s 1.235978331s 1.236691567s 1.240411302s 1.242689974s 1.24346054s 1.246261474s 1.247096108s 1.249447158s 1.250637919s 1.250791278s 1.254052174s 1.255988692s 1.257512766s 1.258777264s 1.260726549s 1.262856683s 1.270918154s 1.279053874s 1.282157892s 1.284534996s 1.28490394s 1.288259693s 1.288616257s 1.288648634s 1.2907574s 1.290892147s 1.291531233s 1.292671581s 1.295103073s 1.296547437s 1.29774355s 1.299023544s 1.303074863s 1.305627958s 1.309164589s 1.310484117s 1.311129817s 1.311573117s 1.313542943s 1.315569357s 1.319516006s 1.321348154s 1.321464921s 1.322805796s 1.323466928s 1.324083762s 1.324163219s 1.326345471s 1.328650755s 1.329759867s 1.332011755s 1.33276752s 1.333530961s 1.335677032s 1.338335445s 1.339655496s 1.344680579s 1.345853109s 1.347217908s 1.34724745s 1.349535939s 1.359718862s 1.359769871s 1.36523578s 1.366660188s 1.368588869s 1.370504485s 1.371272931s 1.37349259s 1.374484093s 1.377500282s 1.382878157s 1.383291266s 1.385821034s 1.386773841s 1.389937796s 1.391325595s 1.403020163s 1.403106708s 1.412406988s 1.416029136s 1.416788908s 1.418163657s 1.424736065s 1.426478788s 1.426856472s 1.42805811s 1.432924526s 1.441481342s 1.445103771s 1.447258782s 1.451135756s 1.453661038s 1.455558711s 1.461187635s 1.462117074s 1.46598125s 1.476011006s 1.481562215s 1.481781319s 1.481846737s 1.507809925s 1.529550039s 1.542008136s 1.547098761s 1.557856748s 1.565211015s 1.6121937s 1.657180295s 1.672749499s 1.727712022s 1.729770095s 1.739153407s 1.764730136s 1.775919275s 1.797549815s 1.848079924s 1.875018411s]
Jul  1 12:04:51.492: INFO: 50 %ile: 1.282157892s
Jul  1 12:04:51.492: INFO: 90 %ile: 1.481562215s
Jul  1 12:04:51.492: INFO: 99 %ile: 1.848079924s
Jul  1 12:04:51.492: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:04:51.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-5ddsn" for this suite.
Jul  1 12:05:21.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:05:21.725: INFO: namespace: e2e-tests-svc-latency-5ddsn, resource: bindings, ignored listing per whitelist
Jul  1 12:05:21.749: INFO: namespace e2e-tests-svc-latency-5ddsn deletion completed in 30.182097399s

• [SLOW TEST:50.869 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
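
Each "Created: latency-svc-…" / "Got endpoints: …" pair above corresponds to creating a Service and timing how long it takes for that Service's Endpoints to show a ready address for the svc-latency-rc pod. The sketch below shows such a per-sample Service; the selector label is an assumption about how the replication controller's pods are labelled, and the port is a placeholder.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// latencySampleService builds one Service per latency sample; the measured
// latency is the time from creating it until its Endpoints become ready.
func latencySampleService(name string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "svc-latency-rc"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", latencySampleService("latency-svc-demo")) }
```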
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:05:21.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jul  1 12:05:21.892: INFO: Waiting up to 5m0s for pod "var-expansion-80561827-9bf8-11e9-9f49-0242ac110006" in namespace "e2e-tests-var-expansion-ljrxm" to be "success or failure"
Jul  1 12:05:21.897: INFO: Pod "var-expansion-80561827-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.554275ms
Jul  1 12:05:23.903: INFO: Pod "var-expansion-80561827-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010314973s
Jul  1 12:05:25.910: INFO: Pod "var-expansion-80561827-9bf8-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017856471s
STEP: Saw pod success
Jul  1 12:05:25.910: INFO: Pod "var-expansion-80561827-9bf8-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:05:25.917: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod var-expansion-80561827-9bf8-11e9-9f49-0242ac110006 container dapi-container: 
STEP: delete the pod
Jul  1 12:05:25.957: INFO: Waiting for pod var-expansion-80561827-9bf8-11e9-9f49-0242ac110006 to disappear
Jul  1 12:05:25.961: INFO: Pod var-expansion-80561827-9bf8-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:05:25.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-ljrxm" for this suite.
Jul  1 12:05:32.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:05:32.086: INFO: namespace: e2e-tests-var-expansion-ljrxm, resource: bindings, ignored listing per whitelist
Jul  1 12:05:32.113: INFO: namespace e2e-tests-var-expansion-ljrxm deletion completed in 6.146739799s

• [SLOW TEST:10.363 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
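
A minimal sketch of the substitution being tested: Kubernetes expands `$(NAME)` references in a container's command and args from the container's environment variables before the process starts, so no shell is involved. The image, names, and values below are placeholders, not the suite's actual spec.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod echoes an env var referenced via $(MESSAGE) in the command;
// the kubelet substitutes the value before starting the container.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"echo", "value is $(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "test-value", // hypothetical value
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", varExpansionPod()) }
```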
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:05:32.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  1 12:05:32.244: INFO: Waiting up to 5m0s for pod "pod-86821cd3-9bf8-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-dzz84" to be "success or failure"
Jul  1 12:05:32.248: INFO: Pod "pod-86821cd3-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0934ms
Jul  1 12:05:34.265: INFO: Pod "pod-86821cd3-9bf8-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021404922s
Jul  1 12:05:36.270: INFO: Pod "pod-86821cd3-9bf8-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025682234s
STEP: Saw pod success
Jul  1 12:05:36.270: INFO: Pod "pod-86821cd3-9bf8-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:05:36.272: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-86821cd3-9bf8-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:05:36.336: INFO: Waiting for pod pod-86821cd3-9bf8-11e9-9f49-0242ac110006 to disappear
Jul  1 12:05:36.343: INFO: Pod pod-86821cd3-9bf8-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:05:36.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dzz84" for this suite.
Jul  1 12:05:42.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:05:42.532: INFO: namespace: e2e-tests-emptydir-dzz84, resource: bindings, ignored listing per whitelist
Jul  1 12:05:42.560: INFO: namespace e2e-tests-emptydir-dzz84 deletion completed in 6.210397814s

• [SLOW TEST:10.447 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
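The (root,0666,tmpfs) case above, like the (root,0644,tmpfs), (root,0777,tmpfs) and (non-root,0777,default) variants later in this run, boils down to mounting an emptyDir volume and checking file modes on it; the variants differ only in the file mode, the user the container runs as, and whether medium: Memory is set. A rough hand-written sketch of the tmpfs case (names and paths are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /mnt/volume/data && chmod 0666 /mnt/volume/data && ls -ln /mnt/volume && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo   # shows the 0666 file and a tmpfs mount entry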
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:05:42.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul  1 12:05:46.746: INFO: Pod pod-hostip-8cba51d2-9bf8-11e9-9f49-0242ac110006 has hostIP: 192.168.100.12
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:05:46.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pllwh" for this suite.
Jul  1 12:06:08.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:06:08.935: INFO: namespace: e2e-tests-pods-pllwh, resource: bindings, ignored listing per whitelist
Jul  1 12:06:09.015: INFO: namespace e2e-tests-pods-pllwh deletion completed in 22.264370582s

• [SLOW TEST:26.454 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
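The host IP assertion above reads the pod's status.hostIP field, which can also be exposed to the container itself through the downward API. A small sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo host IP is $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# the same value the test reads from the API:
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'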
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:06:09.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-djdx6
Jul  1 12:06:13.185: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-djdx6
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 12:06:13.190: INFO: Initial restart count of pod liveness-http is 0
Jul  1 12:06:31.234: INFO: Restart count of pod e2e-tests-container-probe-djdx6/liveness-http is now 1 (18.043505565s elapsed)
Jul  1 12:06:51.294: INFO: Restart count of pod e2e-tests-container-probe-djdx6/liveness-http is now 2 (38.103932556s elapsed)
Jul  1 12:07:11.364: INFO: Restart count of pod e2e-tests-container-probe-djdx6/liveness-http is now 3 (58.173605439s elapsed)
Jul  1 12:07:31.507: INFO: Restart count of pod e2e-tests-container-probe-djdx6/liveness-http is now 4 (1m18.316784531s elapsed)
Jul  1 12:08:31.697: INFO: Restart count of pod e2e-tests-container-probe-djdx6/liveness-http is now 5 (2m18.506400339s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:08:31.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-djdx6" for this suite.
Jul  1 12:08:37.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:08:37.965: INFO: namespace: e2e-tests-container-probe-djdx6, resource: bindings, ignored listing per whitelist
Jul  1 12:08:38.000: INFO: namespace e2e-tests-container-probe-djdx6 deletion completed in 6.25327526s

• [SLOW TEST:148.985 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
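The monotonically increasing restart count comes from a failing HTTP liveness probe: each failure streak past failureThreshold makes the kubelet restart the container, and status.containerStatuses[].restartCount only ever goes up. A sketch of a pod that restarts the same way (the /healthz path deliberately returns 404 on a stock nginx, so the probe keeps failing):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: nginx
    livenessProbe:
      httpGet:
        path: /healthz   # nginx serves 404 here, so the probe fails
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# watch the restart count climb and never reset
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'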
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:08:38.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jul  1 12:08:38.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bgfg4'
Jul  1 12:08:38.476: INFO: stderr: ""
Jul  1 12:08:38.476: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jul  1 12:08:39.484: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:08:39.484: INFO: Found 0 / 1
Jul  1 12:08:40.481: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:08:40.481: INFO: Found 0 / 1
Jul  1 12:08:41.482: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:08:41.482: INFO: Found 1 / 1
Jul  1 12:08:41.482: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  1 12:08:41.487: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:08:41.487: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jul  1 12:08:41.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tvr8b redis-master --namespace=e2e-tests-kubectl-bgfg4'
Jul  1 12:08:41.626: INFO: stderr: ""
Jul  1 12:08:41.626: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jul 12:08:40.805 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jul 12:08:40.805 # Server started, Redis version 3.2.12\n1:M 01 Jul 12:08:40.806 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jul 12:08:40.806 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jul  1 12:08:41.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tvr8b redis-master --namespace=e2e-tests-kubectl-bgfg4 --tail=1'
Jul  1 12:08:41.753: INFO: stderr: ""
Jul  1 12:08:41.753: INFO: stdout: "1:M 01 Jul 12:08:40.806 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jul  1 12:08:41.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tvr8b redis-master --namespace=e2e-tests-kubectl-bgfg4 --limit-bytes=1'
Jul  1 12:08:41.844: INFO: stderr: ""
Jul  1 12:08:41.844: INFO: stdout: " "
STEP: exposing timestamps
Jul  1 12:08:41.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tvr8b redis-master --namespace=e2e-tests-kubectl-bgfg4 --tail=1 --timestamps'
Jul  1 12:08:41.940: INFO: stderr: ""
Jul  1 12:08:41.940: INFO: stdout: "2019-07-01T12:08:40.806449026Z 1:M 01 Jul 12:08:40.806 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jul  1 12:08:44.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tvr8b redis-master --namespace=e2e-tests-kubectl-bgfg4 --since=1s'
Jul  1 12:08:44.608: INFO: stderr: ""
Jul  1 12:08:44.608: INFO: stdout: ""
Jul  1 12:08:44.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tvr8b redis-master --namespace=e2e-tests-kubectl-bgfg4 --since=24h'
Jul  1 12:08:44.731: INFO: stderr: ""
Jul  1 12:08:44.731: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jul 12:08:40.805 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jul 12:08:40.805 # Server started, Redis version 3.2.12\n1:M 01 Jul 12:08:40.806 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jul 12:08:40.806 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jul  1 12:08:44.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bgfg4'
Jul  1 12:08:44.814: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:08:44.814: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul  1 12:08:44.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-bgfg4'
Jul  1 12:08:44.909: INFO: stderr: "No resources found.\n"
Jul  1 12:08:44.909: INFO: stdout: ""
Jul  1 12:08:44.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-bgfg4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  1 12:08:45.007: INFO: stderr: ""
Jul  1 12:08:45.007: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:08:45.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bgfg4" for this suite.
Jul  1 12:09:07.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:09:07.081: INFO: namespace: e2e-tests-kubectl-bgfg4, resource: bindings, ignored listing per whitelist
Jul  1 12:09:07.108: INFO: namespace e2e-tests-kubectl-bgfg4 deletion completed in 22.096547117s

• [SLOW TEST:29.107 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
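The filters used above can be reproduced by hand against any single-container pod; the flags are the same ones the suite passes (the pod name here is illustrative):

kubectl logs redis-master-demo                       # full container log
kubectl logs redis-master-demo --tail=1              # last line only
kubectl logs redis-master-demo --limit-bytes=1       # first byte only
kubectl logs redis-master-demo --tail=1 --timestamps
kubectl logs redis-master-demo --since=1s            # empty if nothing was logged in the last second
kubectl logs redis-master-demo --since=24h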
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:09:07.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-06a1927f-9bf9-11e9-9f49-0242ac110006
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:09:11.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mk9jp" for this suite.
Jul  1 12:09:33.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:09:33.356: INFO: namespace: e2e-tests-configmap-mk9jp, resource: bindings, ignored listing per whitelist
Jul  1 12:09:33.414: INFO: namespace e2e-tests-configmap-mk9jp deletion completed in 22.121873658s

• [SLOW TEST:26.307 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
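Binary payloads go into a ConfigMap's binaryData field as base64; when the ConfigMap is mounted as a volume, the key appears as a file containing the decoded bytes. A minimal sketch (names and payload are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-config-demo
data:
  text-key: plain text value
binaryData:
  payload: "3q2+7w=="   # base64 for the raw bytes 0xDE 0xAD 0xBE 0xEF
EOF
# a pod mounting this ConfigMap as a volume sees ./text-key and ./payload as files,
# the latter containing the four raw bytes
kubectl get configmap binary-config-demo -o jsonpath='{.binaryData.payload}{"\n"}'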
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:09:33.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  1 12:09:33.526: INFO: Waiting up to 5m0s for pod "pod-1651a4b6-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-lb59n" to be "success or failure"
Jul  1 12:09:33.530: INFO: Pod "pod-1651a4b6-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332926ms
Jul  1 12:09:35.540: INFO: Pod "pod-1651a4b6-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014102917s
Jul  1 12:09:37.545: INFO: Pod "pod-1651a4b6-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019142232s
STEP: Saw pod success
Jul  1 12:09:37.545: INFO: Pod "pod-1651a4b6-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:09:37.551: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-1651a4b6-9bf9-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:09:37.623: INFO: Waiting for pod pod-1651a4b6-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:09:37.631: INFO: Pod pod-1651a4b6-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:09:37.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lb59n" for this suite.
Jul  1 12:09:43.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:09:43.757: INFO: namespace: e2e-tests-emptydir-lb59n, resource: bindings, ignored listing per whitelist
Jul  1 12:09:43.784: INFO: namespace e2e-tests-emptydir-lb59n deletion completed in 6.148300418s

• [SLOW TEST:10.369 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:09:43.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-c7psb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-c7psb to expose endpoints map[]
Jul  1 12:09:44.025: INFO: Get endpoints failed (4.257159ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul  1 12:09:45.030: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-c7psb exposes endpoints map[] (1.008797668s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-c7psb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-c7psb to expose endpoints map[pod1:[100]]
Jul  1 12:09:48.127: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-c7psb exposes endpoints map[pod1:[100]] (3.083028153s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-c7psb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-c7psb to expose endpoints map[pod1:[100] pod2:[101]]
Jul  1 12:09:51.285: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-c7psb exposes endpoints map[pod2:[101] pod1:[100]] (3.154182845s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-c7psb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-c7psb to expose endpoints map[pod2:[101]]
Jul  1 12:09:51.316: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-c7psb exposes endpoints map[pod2:[101]] (20.791773ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-c7psb
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-c7psb to expose endpoints map[]
Jul  1 12:09:52.388: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-c7psb exposes endpoints map[] (1.065990891s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:09:52.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-c7psb" for this suite.
Jul  1 12:10:14.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:10:14.582: INFO: namespace: e2e-tests-services-c7psb, resource: bindings, ignored listing per whitelist
Jul  1 12:10:14.651: INFO: namespace e2e-tests-services-c7psb deletion completed in 22.162405696s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:30.867 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
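The endpoint maps being validated above (pod1:[100], pod2:[101], ...) are what the endpoints controller publishes for a multi-port Service as matching pods come and go. A hand-written sketch with two named ports (all names and port numbers are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-demo
spec:
  selector:
    app: multiport
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: metrics
    port: 81
    targetPort: metrics
---
apiVersion: v1
kind: Pod
metadata:
  name: multiport-pod
  labels:
    app: multiport
spec:
  containers:
  - name: server
    image: nginx
    ports:
    - name: http
      containerPort: 80
    - name: metrics
      containerPort: 8080
EOF
# as the pod becomes Ready its address shows up under both ports;
# deleting it empties the endpoints again
kubectl get endpoints multi-endpoint-demo -o wide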
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:10:14.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul  1 12:10:14.778: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  1 12:10:14.786: INFO: Waiting for terminating namespaces to be deleted...
Jul  1 12:10:14.788: INFO: Logging pods the kubelet thinks are on node hunter-server-x6tdbol33slm before test
Jul  1 12:10:14.794: INFO: kube-apiserver-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jul  1 12:10:14.794: INFO: weave-net-z4vkv from kube-system started at 2019-06-16 12:55:36 +0000 UTC (2 container statuses recorded)
Jul  1 12:10:14.794: INFO: 	Container weave ready: true, restart count 0
Jul  1 12:10:14.794: INFO: 	Container weave-npc ready: true, restart count 0
Jul  1 12:10:14.794: INFO: coredns-86c58d9df4-zdm4x from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jul  1 12:10:14.794: INFO: 	Container coredns ready: true, restart count 0
Jul  1 12:10:14.794: INFO: kube-scheduler-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jul  1 12:10:14.794: INFO: coredns-86c58d9df4-99n2k from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jul  1 12:10:14.794: INFO: 	Container coredns ready: true, restart count 0
Jul  1 12:10:14.794: INFO: kube-proxy-ww64l from kube-system started at 2019-06-16 12:55:34 +0000 UTC (1 container statuses recorded)
Jul  1 12:10:14.794: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  1 12:10:14.794: INFO: etcd-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jul  1 12:10:14.794: INFO: kube-controller-manager-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-315723d9-9bf9-11e9-9f49-0242ac110006 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-315723d9-9bf9-11e9-9f49-0242ac110006 off the node hunter-server-x6tdbol33slm
STEP: verifying the node doesn't have the label kubernetes.io/e2e-315723d9-9bf9-11e9-9f49-0242ac110006
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:10:22.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9f58b" for this suite.
Jul  1 12:10:32.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:10:33.031: INFO: namespace: e2e-tests-sched-pred-9f58b, resource: bindings, ignored listing per whitelist
Jul  1 12:10:33.092: INFO: namespace e2e-tests-sched-pred-9f58b deletion completed in 10.144838284s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.440 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
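The STEPs above (apply a random label, relaunch the pod with a matching nodeSelector, remove the label) can be replayed by hand; the label key/value and pod name below are illustrative, and <node-name> stands for whichever node the first unlabelled pod landed on:

kubectl label node <node-name> example.com/e2e-demo=42
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod nodeselector-demo -o wide               # scheduled onto the labelled node
kubectl label node <node-name> example.com/e2e-demo-    # clean up the label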
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:10:33.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul  1 12:10:33.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:34.970: INFO: stderr: ""
Jul  1 12:10:34.970: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  1 12:10:34.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:35.126: INFO: stderr: ""
Jul  1 12:10:35.127: INFO: stdout: "update-demo-nautilus-467lz update-demo-nautilus-vdqdd "
Jul  1 12:10:35.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-467lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:35.215: INFO: stderr: ""
Jul  1 12:10:35.215: INFO: stdout: ""
Jul  1 12:10:35.215: INFO: update-demo-nautilus-467lz is created but not running
Jul  1 12:10:40.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:40.315: INFO: stderr: ""
Jul  1 12:10:40.315: INFO: stdout: "update-demo-nautilus-467lz update-demo-nautilus-vdqdd "
Jul  1 12:10:40.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-467lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:40.407: INFO: stderr: ""
Jul  1 12:10:40.407: INFO: stdout: "true"
Jul  1 12:10:40.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-467lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:40.489: INFO: stderr: ""
Jul  1 12:10:40.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:10:40.489: INFO: validating pod update-demo-nautilus-467lz
Jul  1 12:10:40.515: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:10:40.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  1 12:10:40.515: INFO: update-demo-nautilus-467lz is verified up and running
Jul  1 12:10:40.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vdqdd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:40.611: INFO: stderr: ""
Jul  1 12:10:40.611: INFO: stdout: "true"
Jul  1 12:10:40.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vdqdd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:40.701: INFO: stderr: ""
Jul  1 12:10:40.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  1 12:10:40.701: INFO: validating pod update-demo-nautilus-vdqdd
Jul  1 12:10:41.191: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  1 12:10:41.191: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  1 12:10:41.191: INFO: update-demo-nautilus-vdqdd is verified up and running
STEP: using delete to clean up resources
Jul  1 12:10:41.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:41.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  1 12:10:41.375: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  1 12:10:41.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-sk8k4'
Jul  1 12:10:41.504: INFO: stderr: "No resources found.\n"
Jul  1 12:10:41.504: INFO: stdout: ""
Jul  1 12:10:41.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-sk8k4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  1 12:10:41.597: INFO: stderr: ""
Jul  1 12:10:41.597: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:10:41.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sk8k4" for this suite.
Jul  1 12:11:03.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:11:03.739: INFO: namespace: e2e-tests-kubectl-sk8k4, resource: bindings, ignored listing per whitelist
Jul  1 12:11:03.806: INFO: namespace e2e-tests-kubectl-sk8k4 deletion completed in 22.202140989s

• [SLOW TEST:30.714 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
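The manifest the suite pipes into 'kubectl create -f -' here is roughly a two-replica ReplicationController over the nautilus test image; an approximation follows (the exact manifest lives in the e2e test data, so treat this as a sketch):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF
kubectl get pods -l name=update-demo
kubectl delete rc update-demo-nautilus --grace-period=0 --force   # same forced cleanup as above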
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:11:03.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  1 12:11:03.932: INFO: Waiting up to 5m0s for pod "pod-4c34c593-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-b5n74" to be "success or failure"
Jul  1 12:11:03.947: INFO: Pod "pod-4c34c593-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.975226ms
Jul  1 12:11:05.952: INFO: Pod "pod-4c34c593-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020005706s
Jul  1 12:11:07.957: INFO: Pod "pod-4c34c593-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025361311s
STEP: Saw pod success
Jul  1 12:11:07.957: INFO: Pod "pod-4c34c593-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:11:07.962: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-4c34c593-9bf9-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:11:08.014: INFO: Waiting for pod pod-4c34c593-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:11:08.029: INFO: Pod pod-4c34c593-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:11:08.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b5n74" for this suite.
Jul  1 12:11:14.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:11:14.144: INFO: namespace: e2e-tests-emptydir-b5n74, resource: bindings, ignored listing per whitelist
Jul  1 12:11:14.148: INFO: namespace e2e-tests-emptydir-b5n74 deletion completed in 6.114088597s

• [SLOW TEST:10.341 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:11:14.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-525e953d-9bf9-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 12:11:14.273: INFO: Waiting up to 5m0s for pod "pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-kwrqv" to be "success or failure"
Jul  1 12:11:14.289: INFO: Pod "pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.384456ms
Jul  1 12:11:16.292: INFO: Pod "pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019636911s
Jul  1 12:11:18.298: INFO: Pod "pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024659825s
STEP: Saw pod success
Jul  1 12:11:18.298: INFO: Pod "pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:11:18.300: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jul  1 12:11:18.357: INFO: Waiting for pod pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:11:18.395: INFO: Pod pod-secrets-525f2024-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:11:18.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kwrqv" for this suite.
Jul  1 12:11:24.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:11:24.457: INFO: namespace: e2e-tests-secrets-kwrqv, resource: bindings, ignored listing per whitelist
Jul  1 12:11:24.523: INFO: namespace e2e-tests-secrets-kwrqv deletion completed in 6.123307646s

• [SLOW TEST:10.375 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
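"mappings and Item Mode" refers to the items list of a secret volume, which remaps a key to a different file path and sets a per-file mode. A minimal sketch (names, paths and mode are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - key: data-1
        path: new-path-data-1   # the key is exposed under this file name
        mode: 0400              # per-item file mode
EOF
kubectl logs pod-secrets-demo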
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:11:24.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-588ff674-9bf9-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 12:11:24.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-fscxq" to be "success or failure"
Jul  1 12:11:24.713: INFO: Pod "pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 44.023555ms
Jul  1 12:11:26.717: INFO: Pod "pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048069547s
Jul  1 12:11:28.722: INFO: Pod "pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053853811s
STEP: Saw pod success
Jul  1 12:11:28.723: INFO: Pod "pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:11:28.726: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jul  1 12:11:28.779: INFO: Waiting for pod pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:11:28.784: INFO: Pod pod-configmaps-5890f42c-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:11:28.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fscxq" for this suite.
Jul  1 12:11:34.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:11:34.860: INFO: namespace: e2e-tests-configmap-fscxq, resource: bindings, ignored listing per whitelist
Jul  1 12:11:34.918: INFO: namespace e2e-tests-configmap-fscxq deletion completed in 6.130248716s

• [SLOW TEST:10.395 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
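Here the same ConfigMap is mounted through two separate volumes of one pod; the plain "consumable from pods in volume" case a few tests further on is just the single-mount version of the same thing. A sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-config-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: shared-config-demo
  - name: configmap-volume-2
    configMap:
      name: shared-config-demo
EOF
kubectl logs pod-configmaps-demo   # prints value-1 twice, once per mount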
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:11:34.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:11:35.030: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-7857h" to be "success or failure"
Jul  1 12:11:35.073: INFO: Pod "downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 42.276613ms
Jul  1 12:11:37.078: INFO: Pod "downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048071568s
Jul  1 12:11:39.103: INFO: Pod "downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072442574s
STEP: Saw pod success
Jul  1 12:11:39.103: INFO: Pod "downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:11:39.107: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:11:39.149: INFO: Waiting for pod downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:11:39.162: INFO: Pod downwardapi-volume-5ebefecf-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:11:39.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7857h" for this suite.
Jul  1 12:11:45.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:11:45.225: INFO: namespace: e2e-tests-downward-api-7857h, resource: bindings, ignored listing per whitelist
Jul  1 12:11:45.348: INFO: namespace e2e-tests-downward-api-7857h deletion completed in 6.180461499s

• [SLOW TEST:10.429 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
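The cpu limit is surfaced to the container through a downwardAPI volume item with a resourceFieldRef; the divisor controls the unit. A sketch (names, limit and divisor are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m   # report the limit in millicores
EOF
kubectl logs downwardapi-volume-demo   # prints 500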
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:11:45.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-64fa4eb0-9bf9-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 12:11:45.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-sj4qc" to be "success or failure"
Jul  1 12:11:45.513: INFO: Pod "pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 21.975027ms
Jul  1 12:11:47.518: INFO: Pod "pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026877555s
Jul  1 12:11:49.522: INFO: Pod "pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031087094s
STEP: Saw pod success
Jul  1 12:11:49.522: INFO: Pod "pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:11:49.525: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jul  1 12:11:49.557: INFO: Waiting for pod pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:11:49.567: INFO: Pod pod-configmaps-64fae674-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:11:49.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sj4qc" for this suite.
Jul  1 12:11:55.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:11:55.709: INFO: namespace: e2e-tests-configmap-sj4qc, resource: bindings, ignored listing per whitelist
Jul  1 12:11:55.749: INFO: namespace e2e-tests-configmap-sj4qc deletion completed in 6.172599145s

• [SLOW TEST:10.401 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:11:55.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:11:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5bv9g" for this suite.
Jul  1 12:12:18.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:12:18.244: INFO: namespace: e2e-tests-pods-5bv9g, resource: bindings, ignored listing per whitelist
Jul  1 12:12:18.261: INFO: namespace e2e-tests-pods-5bv9g deletion completed in 22.281303223s

• [SLOW TEST:22.512 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
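The QOS class being verified is computed by the API server from the pod's resource requests and limits and recorded in status.qosClass. A sketch of the Guaranteed case (requests equal to limits for every container; values illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 64Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}{"\n"}'   # Guaranteed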
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:12:18.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  1 12:12:18.355: INFO: Waiting up to 5m0s for pod "pod-78915b8e-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-5rstr" to be "success or failure"
Jul  1 12:12:18.381: INFO: Pod "pod-78915b8e-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 26.064348ms
Jul  1 12:12:20.385: INFO: Pod "pod-78915b8e-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029594778s
Jul  1 12:12:22.388: INFO: Pod "pod-78915b8e-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033100038s
STEP: Saw pod success
Jul  1 12:12:22.388: INFO: Pod "pod-78915b8e-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:12:22.391: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-78915b8e-9bf9-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:12:22.415: INFO: Waiting for pod pod-78915b8e-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:12:22.420: INFO: Pod pod-78915b8e-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:12:22.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5rstr" for this suite.
Jul  1 12:12:28.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:12:28.573: INFO: namespace: e2e-tests-emptydir-5rstr, resource: bindings, ignored listing per whitelist
Jul  1 12:12:28.625: INFO: namespace e2e-tests-emptydir-5rstr deletion completed in 6.20062873s

• [SLOW TEST:10.364 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
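
The emptydir pod spec itself is not captured in the log above. A minimal hedged sketch of the shape being exercised, assuming busybox as the image and illustrative names (the real fixture uses the e2e mounttest image), is:

    // emptydir_sketch.go: hedged sketch of a non-root container writing a 0777 file
    // on an emptyDir volume backed by the default (node disk) medium.
    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-demo"}, // illustrative name
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			SecurityContext: &corev1.PodSecurityContext{
    				RunAsUser: int64Ptr(1000), // any non-root UID
    			},
    			Volumes: []corev1.Volume{{
    				Name: "scratch",
    				// Medium left empty => "default" medium, i.e. node-local storage.
    				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
    			}},
    			Containers: []corev1.Container{{
    				Name:  "writer",
    				Image: "busybox", // illustrative image
    				Command: []string{"sh", "-c",
    					"touch /scratch/f && chmod 0777 /scratch/f && stat -c %a /scratch/f"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
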
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:12:28.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jul  1 12:12:29.321: INFO: created pod pod-service-account-defaultsa
Jul  1 12:12:29.321: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul  1 12:12:29.330: INFO: created pod pod-service-account-mountsa
Jul  1 12:12:29.330: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul  1 12:12:29.449: INFO: created pod pod-service-account-nomountsa
Jul  1 12:12:29.449: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul  1 12:12:29.460: INFO: created pod pod-service-account-defaultsa-mountspec
Jul  1 12:12:29.460: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul  1 12:12:29.498: INFO: created pod pod-service-account-mountsa-mountspec
Jul  1 12:12:29.498: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul  1 12:12:29.516: INFO: created pod pod-service-account-nomountsa-mountspec
Jul  1 12:12:29.516: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul  1 12:12:29.535: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul  1 12:12:29.535: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul  1 12:12:29.646: INFO: created pod pod-service-account-mountsa-nomountspec
Jul  1 12:12:29.646: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul  1 12:12:29.664: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul  1 12:12:29.664: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:12:29.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rjkdt" for this suite.
Jul  1 12:13:17.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:13:17.985: INFO: namespace: e2e-tests-svcaccounts-rjkdt, resource: bindings, ignored listing per whitelist
Jul  1 12:13:17.999: INFO: namespace e2e-tests-svcaccounts-rjkdt deletion completed in 48.18389114s

• [SLOW TEST:49.373 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:13:17.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  1 12:13:18.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ghn95'
Jul  1 12:13:18.236: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  1 12:13:18.236: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul  1 12:13:18.254: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul  1 12:13:18.273: INFO: scanned /root for discovery docs: 
Jul  1 12:13:18.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ghn95'
Jul  1 12:13:34.158: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  1 12:13:34.158: INFO: stdout: "Created e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33\nScaling up e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jul  1 12:13:34.158: INFO: stdout: "Created e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33\nScaling up e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul  1 12:13:34.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ghn95'
Jul  1 12:13:34.265: INFO: stderr: ""
Jul  1 12:13:34.265: INFO: stdout: "e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33-mfdlv "
Jul  1 12:13:34.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33-mfdlv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ghn95'
Jul  1 12:13:34.359: INFO: stderr: ""
Jul  1 12:13:34.359: INFO: stdout: "true"
Jul  1 12:13:34.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33-mfdlv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ghn95'
Jul  1 12:13:34.492: INFO: stderr: ""
Jul  1 12:13:34.492: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul  1 12:13:34.492: INFO: e2e-test-nginx-rc-e394e3f3a310cef17eaaef74fad5dc33-mfdlv is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jul  1 12:13:34.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ghn95'
Jul  1 12:13:34.611: INFO: stderr: ""
Jul  1 12:13:34.611: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:13:34.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ghn95" for this suite.
Jul  1 12:13:56.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:13:56.669: INFO: namespace: e2e-tests-kubectl-ghn95, resource: bindings, ignored listing per whitelist
Jul  1 12:13:56.754: INFO: namespace e2e-tests-kubectl-ghn95 deletion completed in 22.137496401s

• [SLOW TEST:38.755 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:13:56.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:14:01.056: INFO: Waiting up to 5m0s for pod "client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-pods-c9qpg" to be "success or failure"
Jul  1 12:14:01.061: INFO: Pod "client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581314ms
Jul  1 12:14:03.073: INFO: Pod "client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017381085s
Jul  1 12:14:05.078: INFO: Pod "client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021921019s
STEP: Saw pod success
Jul  1 12:14:05.078: INFO: Pod "client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:14:05.081: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006 container env3cont: 
STEP: delete the pod
Jul  1 12:14:05.138: INFO: Waiting for pod client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:14:05.147: INFO: Pod client-envvars-b5be9a1c-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:14:05.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-c9qpg" for this suite.
Jul  1 12:14:45.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:14:45.268: INFO: namespace: e2e-tests-pods-c9qpg, resource: bindings, ignored listing per whitelist
Jul  1 12:14:45.352: INFO: namespace e2e-tests-pods-c9qpg deletion completed in 40.124776646s

• [SLOW TEST:48.598 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:14:45.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  1 12:14:53.666: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:14:53.671: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:14:55.671: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:14:55.693: INFO: Pod pod-with-poststart-http-hook still exists
Jul  1 12:14:57.671: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  1 12:14:57.676: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:14:57.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-98hbp" for this suite.
Jul  1 12:15:19.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:15:19.744: INFO: namespace: e2e-tests-container-lifecycle-hook-98hbp, resource: bindings, ignored listing per whitelist
Jul  1 12:15:19.788: INFO: namespace e2e-tests-container-lifecycle-hook-98hbp deletion completed in 22.105932932s

• [SLOW TEST:34.436 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
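
The pod with the lifecycle hook is not shown in the log above. A hedged sketch of a postStart httpGet hook of the same shape follows; it is written against a current k8s.io/api, where the hook type is corev1.LifecycleHandler (the v1.13-era API of this log named it corev1.Handler), and the target host/port stand in for the separate handler pod the test runs.

    // poststart_http_sketch.go: hedged sketch of a pod with a postStart httpGet hook.
    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "main",
    				Image: "busybox", // illustrative
    				Lifecycle: &corev1.Lifecycle{
    					PostStart: &corev1.LifecycleHandler{
    						HTTPGet: &corev1.HTTPGetAction{
    							// The e2e test points this at its handler pod; the
    							// host and port here are placeholders.
    							Host: "10.0.0.10",
    							Path: "/echo?msg=poststart",
    							Port: intstr.FromInt(8080),
    						},
    					},
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
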
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:15:19.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul  1 12:15:19.878: INFO: Waiting up to 5m0s for pod "client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-containers-v5phg" to be "success or failure"
Jul  1 12:15:19.894: INFO: Pod "client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.306054ms
Jul  1 12:15:21.920: INFO: Pod "client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042021534s
Jul  1 12:15:23.924: INFO: Pod "client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045874469s
STEP: Saw pod success
Jul  1 12:15:23.924: INFO: Pod "client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:15:23.927: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:15:23.963: INFO: Waiting for pod client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:15:23.970: INFO: Pod client-containers-e4c05726-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:15:23.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-v5phg" for this suite.
Jul  1 12:15:30.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:15:30.066: INFO: namespace: e2e-tests-containers-v5phg, resource: bindings, ignored listing per whitelist
Jul  1 12:15:30.193: INFO: namespace e2e-tests-containers-v5phg deletion completed in 6.217733374s

• [SLOW TEST:10.405 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:15:30.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:15:30.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jul  1 12:15:30.389: INFO: stderr: ""
Jul  1 12:15:30.389: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.7\", GitCommit:\"4683545293d792934a7a7e12f2cc47d20b2dd01b\", GitTreeState:\"clean\", BuildDate:\"2019-06-28T12:37:14Z\", GoVersion:\"go1.11.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jul  1 12:15:30.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kbb8s'
Jul  1 12:15:30.580: INFO: stderr: ""
Jul  1 12:15:30.580: INFO: stdout: "replicationcontroller/redis-master created\n"
Jul  1 12:15:30.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kbb8s'
Jul  1 12:15:30.840: INFO: stderr: ""
Jul  1 12:15:30.840: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul  1 12:15:31.846: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:15:31.846: INFO: Found 0 / 1
Jul  1 12:15:32.852: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:15:32.852: INFO: Found 0 / 1
Jul  1 12:15:33.847: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:15:33.847: INFO: Found 1 / 1
Jul  1 12:15:33.847: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  1 12:15:33.855: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:15:33.855: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  1 12:15:33.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-vb29d --namespace=e2e-tests-kubectl-kbb8s'
Jul  1 12:15:34.022: INFO: stderr: ""
Jul  1 12:15:34.022: INFO: stdout: "Name:               redis-master-vb29d\nNamespace:          e2e-tests-kubectl-kbb8s\nPriority:           0\nPriorityClassName:  \nNode:               hunter-server-x6tdbol33slm/192.168.100.12\nStart Time:         Mon, 01 Jul 2019 12:15:30 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.32.0.4\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://7a01108119f9d317392bd2354e4f6f8baaddc30bba940024857450e49ad17ca0\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 01 Jul 2019 12:15:32 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gp7rz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-gp7rz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-gp7rz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                                 Message\n  ----    ------     ----  ----                                 -------\n  Normal  Scheduled  4s    default-scheduler                    Successfully assigned e2e-tests-kubectl-kbb8s/redis-master-vb29d to hunter-server-x6tdbol33slm\n  Normal  Pulled     2s    kubelet, hunter-server-x6tdbol33slm  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, hunter-server-x6tdbol33slm  Created container\n  Normal  Started    1s    kubelet, hunter-server-x6tdbol33slm  Started container\n"
Jul  1 12:15:34.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-kbb8s'
Jul  1 12:15:34.144: INFO: stderr: ""
Jul  1 12:15:34.144: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-kbb8s\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-vb29d\n"
Jul  1 12:15:34.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-kbb8s'
Jul  1 12:15:34.294: INFO: stderr: ""
Jul  1 12:15:34.294: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-kbb8s\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.104.37.76\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.32.0.4:6379\nSession Affinity:  None\nEvents:            \n"
Jul  1 12:15:34.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-server-x6tdbol33slm'
Jul  1 12:15:34.435: INFO: stderr: ""
Jul  1 12:15:34.435: INFO: stdout: "Name:               hunter-server-x6tdbol33slm\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-server-x6tdbol33slm\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 16 Jun 2019 12:55:20 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 16 Jun 2019 12:55:48 +0000   Sun, 16 Jun 2019 12:55:48 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 01 Jul 2019 12:15:30 +0000   Sun, 16 Jun 2019 12:55:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 01 Jul 2019 12:15:30 +0000   Sun, 16 Jun 2019 12:55:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 01 Jul 2019 12:15:30 +0000   Sun, 16 Jun 2019 12:55:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 01 Jul 2019 12:15:30 +0000   Sun, 16 Jun 2019 12:56:00 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  192.168.100.12\n  Hostname:    hunter-server-x6tdbol33slm\nCapacity:\n cpu:                4\n ephemeral-storage:  20263528Ki\n hugepages-2Mi:      0\n memory:             4045928Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18674867374\n hugepages-2Mi:      0\n memory:             3943528Ki\n pods:               110\nSystem Info:\n Machine ID:                 3d8dccd2e2dc43439a8a7bcb64960930\n System UUID:                3D8DCCD2-E2DC-4343-9A8A-7BCB64960930\n Boot ID:                    8456ffa0-d32c-4e2d-b5d0-8d3f937f2a85\n Kernel Version:             4.4.0-142-generic\n OS Image:                   Ubuntu 16.04.6 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.5\n Kubelet Version:            v1.13.7\n Kube-Proxy Version:         v1.13.7\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                  ------------  ----------  ---------------  -------------  ---\n  e2e-tests-kubectl-kbb8s    redis-master-vb29d                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s\n  kube-system                coredns-86c58d9df4-99n2k                              100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     14d\n  kube-system                coredns-86c58d9df4-zdm4x                              100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     14d\n  kube-system                etcd-hunter-server-x6tdbol33slm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14d\n  kube-system                kube-apiserver-hunter-server-x6tdbol33slm             250m (6%)     0 (0%)      0 (0%)           0 (0%)         14d\n  kube-system                kube-controller-manager-hunter-server-x6tdbol33slm    200m (5%)     0 (0%)      0 (0%)           0 (0%)         14d\n  kube-system                kube-proxy-ww64l                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14d\n  kube-system                kube-scheduler-hunter-server-x6tdbol33slm             100m (2%)     0 (0%)      0 (0%)           0 (0%)         14d\n  kube-system                weave-net-z4vkv                                       20m (0%)      0 (0%)      0 (0%)           0 (0%)         14d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                770m (19%)  0 (0%)\n  memory             140Mi (3%)  340Mi (8%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Jul  1 12:15:34.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-kbb8s'
Jul  1 12:15:34.532: INFO: stderr: ""
Jul  1 12:15:34.532: INFO: stdout: "Name:         e2e-tests-kubectl-kbb8s\nLabels:       e2e-framework=kubectl\n              e2e-run=85df4400-9bed-11e9-9f49-0242ac110006\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:15:34.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kbb8s" for this suite.
Jul  1 12:15:56.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:15:56.622: INFO: namespace: e2e-tests-kubectl-kbb8s, resource: bindings, ignored listing per whitelist
Jul  1 12:15:56.673: INFO: namespace e2e-tests-kubectl-kbb8s deletion completed in 22.137761537s

• [SLOW TEST:26.480 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:15:56.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:15:56.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-bxnjv" to be "success or failure"
Jul  1 12:15:56.872: INFO: Pod "downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.187802ms
Jul  1 12:15:58.883: INFO: Pod "downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02497309s
Jul  1 12:16:00.886: INFO: Pod "downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02798903s
STEP: Saw pod success
Jul  1 12:16:00.886: INFO: Pod "downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:16:00.888: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:16:00.921: INFO: Waiting for pod downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006 to disappear
Jul  1 12:16:00.997: INFO: Pod downwardapi-volume-facd0e2b-9bf9-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:16:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bxnjv" for this suite.
Jul  1 12:16:07.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:16:07.081: INFO: namespace: e2e-tests-projected-bxnjv, resource: bindings, ignored listing per whitelist
Jul  1 12:16:07.119: INFO: namespace e2e-tests-projected-bxnjv deletion completed in 6.119120577s

• [SLOW TEST:10.445 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:16:07.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0109b5d0-9bfa-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 12:16:07.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-dvrgj" to be "success or failure"
Jul  1 12:16:07.332: INFO: Pod "pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186727ms
Jul  1 12:16:09.337: INFO: Pod "pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012907704s
Jul  1 12:16:11.343: INFO: Pod "pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019174261s
STEP: Saw pod success
Jul  1 12:16:11.343: INFO: Pod "pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:16:11.347: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 12:16:11.402: INFO: Waiting for pod pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006 to disappear
Jul  1 12:16:11.405: INFO: Pod pod-projected-configmaps-010a830c-9bfa-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:16:11.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dvrgj" for this suite.
Jul  1 12:16:17.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:16:17.505: INFO: namespace: e2e-tests-projected-dvrgj, resource: bindings, ignored listing per whitelist
Jul  1 12:16:17.558: INFO: namespace e2e-tests-projected-dvrgj deletion completed in 6.145130199s

• [SLOW TEST:10.439 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:16:17.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:16:23.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-8v774" for this suite.
Jul  1 12:16:29.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:16:29.908: INFO: namespace: e2e-tests-namespaces-8v774, resource: bindings, ignored listing per whitelist
Jul  1 12:16:29.939: INFO: namespace e2e-tests-namespaces-8v774 deletion completed in 6.083141693s
STEP: Destroying namespace "e2e-tests-nsdeletetest-dz27x" for this suite.
Jul  1 12:16:29.940: INFO: Namespace e2e-tests-nsdeletetest-dz27x was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-ftvg6" for this suite.
Jul  1 12:16:35.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:16:36.040: INFO: namespace: e2e-tests-nsdeletetest-ftvg6, resource: bindings, ignored listing per whitelist
Jul  1 12:16:36.083: INFO: namespace e2e-tests-nsdeletetest-ftvg6 deletion completed in 6.14296618s

• [SLOW TEST:18.525 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:16:36.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  1 12:19:22.343: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:22.369: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:24.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:24.374: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:26.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:26.375: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:28.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:28.374: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:30.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:30.372: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:32.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:32.374: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:34.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:34.375: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:36.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:36.373: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:38.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:38.375: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:40.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:40.375: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:42.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:42.373: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:44.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:44.374: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  1 12:19:46.369: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  1 12:19:46.374: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:19:46.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dbrpx" for this suite.
Jul  1 12:20:08.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:20:08.507: INFO: namespace: e2e-tests-container-lifecycle-hook-dbrpx, resource: bindings, ignored listing per whitelist
Jul  1 12:20:08.527: INFO: namespace e2e-tests-container-lifecycle-hook-dbrpx deletion completed in 22.14715952s

• [SLOW TEST:212.444 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:20:08.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-c2646
Jul  1 12:20:12.697: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-c2646
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 12:20:12.701: INFO: Initial restart count of pod liveness-exec is 0
Jul  1 12:20:58.856: INFO: Restart count of pod e2e-tests-container-probe-c2646/liveness-exec is now 1 (46.154722471s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:20:58.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c2646" for this suite.
Jul  1 12:21:04.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:21:05.010: INFO: namespace: e2e-tests-container-probe-c2646, resource: bindings, ignored listing per whitelist
Jul  1 12:21:05.045: INFO: namespace e2e-tests-container-probe-c2646 deletion completed in 6.145711563s

• [SLOW TEST:56.517 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
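
The liveness-exec pod above restarts once its exec probe starts failing. A hedged sketch of the classic shape of that pod (assumed image, command and timings; written against a current k8s.io/api, where the probe handler is the embedded corev1.ProbeHandler rather than the older corev1.Handler):

    // liveness_exec_sketch.go: hedged sketch of a pod whose container removes
    // /tmp/health after 30s, so `cat /tmp/health` fails and the kubelet restarts it.
    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "liveness",
    				Image: "busybox", // illustrative
    				Command: []string{"/bin/sh", "-c",
    					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					ProbeHandler: corev1.ProbeHandler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 15,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    	// `kubectl get pod liveness-exec` shows RESTARTS increasing once the file is gone,
    	// matching the restartCount check in the log above.
    }
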
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:21:05.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:21:05.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:21:09.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tlrhz" for this suite.
Jul  1 12:22:01.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:22:01.420: INFO: namespace: e2e-tests-pods-tlrhz, resource: bindings, ignored listing per whitelist
Jul  1 12:22:01.587: INFO: namespace e2e-tests-pods-tlrhz deletion completed in 52.239304203s

• [SLOW TEST:56.542 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:22:01.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:22:01.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-s9hch" to be "success or failure"
Jul  1 12:22:01.796: INFO: Pod "downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.799158ms
Jul  1 12:22:03.806: INFO: Pod "downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02224354s
Jul  1 12:22:05.821: INFO: Pod "downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036843086s
STEP: Saw pod success
Jul  1 12:22:05.821: INFO: Pod "downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:22:05.832: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:22:05.967: INFO: Waiting for pod downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006 to disappear
Jul  1 12:22:05.971: INFO: Pod downwardapi-volume-d44a5d7d-9bfa-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:22:05.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s9hch" for this suite.
Jul  1 12:22:12.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:22:12.089: INFO: namespace: e2e-tests-downward-api-s9hch, resource: bindings, ignored listing per whitelist
Jul  1 12:22:12.123: INFO: namespace e2e-tests-downward-api-s9hch deletion completed in 6.147918001s

• [SLOW TEST:10.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:22:12.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jul  1 12:22:12.189: INFO: Waiting up to 5m0s for pod "client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006" in namespace "e2e-tests-containers-v67r5" to be "success or failure"
Jul  1 12:22:12.205: INFO: Pod "client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.704713ms
Jul  1 12:22:14.209: INFO: Pod "client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020804898s
Jul  1 12:22:16.214: INFO: Pod "client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025842328s
STEP: Saw pod success
Jul  1 12:22:16.215: INFO: Pod "client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:22:16.217: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:22:16.346: INFO: Waiting for pod client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006 to disappear
Jul  1 12:22:16.363: INFO: Pod client-containers-da85d5a1-9bfa-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:22:16.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-v67r5" for this suite.
Jul  1 12:22:22.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:22:22.462: INFO: namespace: e2e-tests-containers-v67r5, resource: bindings, ignored listing per whitelist
Jul  1 12:22:22.474: INFO: namespace e2e-tests-containers-v67r5 deletion completed in 6.097008922s

• [SLOW TEST:10.352 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
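The pod used in this test overrides only the image's default arguments (the docker CMD). A hedged sketch of such a pod follows; the name, image, and arguments are placeholders rather than the suite's generated values:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "overridden", "arguments"]   # replaces the image CMD; any image ENTRYPOINT is kept
EOF
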
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:22:22.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006
Jul  1 12:22:22.554: INFO: Pod name my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006: Found 0 pods out of 1
Jul  1 12:22:27.561: INFO: Pod name my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006: Found 1 pods out of 1
Jul  1 12:22:27.561: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006" are running
Jul  1 12:22:27.565: INFO: Pod "my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006-9l598" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 12:22:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 12:22:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 12:22:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-01 12:22:22 +0000 UTC Reason: Message:}])
Jul  1 12:22:27.565: INFO: Trying to dial the pod
Jul  1 12:22:32.593: INFO: Controller my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006: Got expected result from replica 1 [my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006-9l598]: "my-hostname-basic-e0b31b5b-9bfa-11e9-9f49-0242ac110006-9l598", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:22:32.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-jxcj6" for this suite.
Jul  1 12:22:38.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:22:38.691: INFO: namespace: e2e-tests-replication-controller-jxcj6, resource: bindings, ignored listing per whitelist
Jul  1 12:22:38.742: INFO: namespace e2e-tests-replication-controller-jxcj6 deletion completed in 6.143519937s

• [SLOW TEST:16.268 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
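The ReplicationController above runs one replica of a hostname-echoing server and the test dials each pod. A minimal equivalent manifest might look like the sketch below; the name and image tag are assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # illustrative; the suite appends a unique suffix
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed tag; any server that replies with its own hostname works
        ports:
        - containerPort: 9376        # port dialed on each replica
EOF
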
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:22:38.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:22:38.911: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul  1 12:22:43.918: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  1 12:22:43.918: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul  1 12:22:45.925: INFO: Creating deployment "test-rollover-deployment"
Jul  1 12:22:46.006: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul  1 12:22:48.015: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul  1 12:22:48.025: INFO: Ensure that both replica sets have 1 created replica
Jul  1 12:22:48.032: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul  1 12:22:48.042: INFO: Updating deployment test-rollover-deployment
Jul  1 12:22:48.042: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul  1 12:22:50.107: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul  1 12:22:50.145: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul  1 12:22:50.152: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 12:22:50.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580568, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:22:52.162: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 12:22:52.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580570, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:22:54.172: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 12:22:54.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580570, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:22:56.158: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 12:22:56.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580570, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:22:58.161: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 12:22:58.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580570, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:23:00.158: INFO: all replica sets need to contain the pod-template-hash label
Jul  1 12:23:00.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580570, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697580566, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:23:02.159: INFO: 
Jul  1 12:23:02.159: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  1 12:23:02.166: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-xk86m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk86m/deployments/test-rollover-deployment,UID:eea155f3-9bfa-11e9-a678-fa163e0cec1d,ResourceVersion:1855593,Generation:2,CreationTimestamp:2019-07-01 12:22:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-07-01 12:22:46 +0000 UTC 2019-07-01 12:22:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-07-01 12:23:00 +0000 UTC 2019-07-01 12:22:46 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-6b7f9d6597" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul  1 12:23:02.170: INFO: New ReplicaSet "test-rollover-deployment-6b7f9d6597" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597,GenerateName:,Namespace:e2e-tests-deployment-xk86m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk86m/replicasets/test-rollover-deployment-6b7f9d6597,UID:efe3bbd9-9bfa-11e9-a678-fa163e0cec1d,ResourceVersion:1855584,Generation:2,CreationTimestamp:2019-07-01 12:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment eea155f3-9bfa-11e9-a678-fa163e0cec1d 0xc001466667 0xc001466668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  1 12:23:02.170: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul  1 12:23:02.170: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-xk86m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk86m/replicasets/test-rollover-controller,UID:ea6f9588-9bfa-11e9-a678-fa163e0cec1d,ResourceVersion:1855592,Generation:2,CreationTimestamp:2019-07-01 12:22:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment eea155f3-9bfa-11e9-a678-fa163e0cec1d 0xc0014664a7 0xc0014664a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 12:23:02.170: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6586df867b,GenerateName:,Namespace:e2e-tests-deployment-xk86m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk86m/replicasets/test-rollover-deployment-6586df867b,UID:eeafa678-9bfa-11e9-a678-fa163e0cec1d,ResourceVersion:1855561,Generation:2,CreationTimestamp:2019-07-01 12:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment eea155f3-9bfa-11e9-a678-fa163e0cec1d 0xc001466567 0xc001466568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 12:23:02.174: INFO: Pod "test-rollover-deployment-6b7f9d6597-7xgw6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597-7xgw6,GenerateName:test-rollover-deployment-6b7f9d6597-,Namespace:e2e-tests-deployment-xk86m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk86m/pods/test-rollover-deployment-6b7f9d6597-7xgw6,UID:efedcc60-9bfa-11e9-a678-fa163e0cec1d,ResourceVersion:1855569,Generation:0,CreationTimestamp:2019-07-01 12:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-6b7f9d6597 efe3bbd9-9bfa-11e9-a678-fa163e0cec1d 0xc001974537 0xc001974538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xckj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xckj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xckj9 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019745a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001974630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:22:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:22:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:22:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:22:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.5,StartTime:2019-07-01 12:22:48 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-07-01 12:22:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a84913e3687494988d995cfb2231fe67c98dbf5681f0d6f7f3259d2f16d20117}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:23:02.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xk86m" for this suite.
Jul  1 12:23:10.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:23:10.246: INFO: namespace: e2e-tests-deployment-xk86m, resource: bindings, ignored listing per whitelist
Jul  1 12:23:10.289: INFO: namespace e2e-tests-deployment-xk86m deletion completed in 8.112743678s

• [SLOW TEST:31.547 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
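The rollover mechanics logged above (MaxUnavailable: 0, MaxSurge: 1, MinReadySeconds: 10, old ReplicaSets scaled to zero) can be reproduced with a small Deployment plus an image update. The sketch below is an approximation with an illustrative name, reusing images that appear in the log; it is not the suite's exact object:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollover-demo                # illustrative
spec:
  replicas: 1
  minReadySeconds: 10                # a new pod must stay Ready this long before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0              # never drop below the desired replica count during the rollover
      maxSurge: 1                    # allow one extra pod while the new ReplicaSet comes up
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: rollover
        image: docker.io/library/nginx:1.14-alpine
EOF
# Trigger the rollover; once the new pod has been Ready for minReadySeconds,
# the old ReplicaSet is scaled to 0, which is the condition the test waits for:
kubectl set image deployment/rollover-demo rollover=gcr.io/kubernetes-e2e-test-images/redis:1.0
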
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:23:10.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fd3de97a-9bfa-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 12:23:10.468: INFO: Waiting up to 5m0s for pod "pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-sznz8" to be "success or failure"
Jul  1 12:23:10.482: INFO: Pod "pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.295926ms
Jul  1 12:23:12.487: INFO: Pod "pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019114255s
Jul  1 12:23:14.492: INFO: Pod "pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024116276s
Jul  1 12:23:16.497: INFO: Pod "pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029850325s
STEP: Saw pod success
Jul  1 12:23:16.497: INFO: Pod "pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:23:16.501: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jul  1 12:23:16.552: INFO: Waiting for pod pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006 to disappear
Jul  1 12:23:16.561: INFO: Pod pod-secrets-fd3eb838-9bfa-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:23:16.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sznz8" for this suite.
Jul  1 12:23:22.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:23:22.651: INFO: namespace: e2e-tests-secrets-sznz8, resource: bindings, ignored listing per whitelist
Jul  1 12:23:22.702: INFO: namespace e2e-tests-secrets-sznz8 deletion completed in 6.13624521s

• [SLOW TEST:12.412 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
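This test mounts one Secret through two separate volumes in the same pod. A self-contained sketch, with illustrative names and data, is:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: multi-volume-secret          # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:                           # both volumes are backed by the same Secret
  - name: secret-volume-1
    secret:
      secretName: multi-volume-secret
  - name: secret-volume-2
    secret:
      secretName: multi-volume-secret
EOF
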
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:23:22.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  1 12:23:22.888: INFO: Waiting up to 5m0s for pod "downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-x6k2j" to be "success or failure"
Jul  1 12:23:22.891: INFO: Pod "downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.073509ms
Jul  1 12:23:24.894: INFO: Pod "downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006202395s
Jul  1 12:23:26.900: INFO: Pod "downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012366685s
STEP: Saw pod success
Jul  1 12:23:26.900: INFO: Pod "downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:23:26.905: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006 container dapi-container: 
STEP: delete the pod
Jul  1 12:23:26.951: INFO: Waiting for pod downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:23:27.011: INFO: Pod downward-api-04a7f95a-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:23:27.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x6k2j" for this suite.
Jul  1 12:23:33.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:23:33.140: INFO: namespace: e2e-tests-downward-api-x6k2j, resource: bindings, ignored listing per whitelist
Jul  1 12:23:33.172: INFO: namespace e2e-tests-downward-api-x6k2j deletion completed in 6.1557465s

• [SLOW TEST:10.469 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
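The pod in this test exposes the node's IP to the container through a Downward API environment variable. A minimal stand-in (names are illustrative) is:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the field whose value the test reads back from the container log
EOF
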
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:23:33.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:23:33.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul  1 12:23:33.551: INFO: stderr: ""
Jul  1 12:23:33.551: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.7\", GitCommit:\"4683545293d792934a7a7e12f2cc47d20b2dd01b\", GitTreeState:\"clean\", BuildDate:\"2019-06-28T12:37:14Z\", GoVersion:\"go1.11.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.7\", GitCommit:\"4683545293d792934a7a7e12f2cc47d20b2dd01b\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:39:30Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:23:33.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qdslm" for this suite.
Jul  1 12:23:39.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:23:39.646: INFO: namespace: e2e-tests-kubectl-qdslm, resource: bindings, ignored listing per whitelist
Jul  1 12:23:39.707: INFO: namespace e2e-tests-kubectl-qdslm deletion completed in 6.151316398s

• [SLOW TEST:6.536 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:23:39.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0ec18304-9bfb-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume secrets
Jul  1 12:23:39.833: INFO: Waiting up to 5m0s for pod "pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-secrets-7hnw6" to be "success or failure"
Jul  1 12:23:39.864: INFO: Pod "pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 31.176338ms
Jul  1 12:23:41.868: INFO: Pod "pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035254571s
Jul  1 12:23:43.873: INFO: Pod "pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039932037s
STEP: Saw pod success
Jul  1 12:23:43.873: INFO: Pod "pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:23:43.877: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jul  1 12:23:43.956: INFO: Waiting for pod pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:23:43.961: INFO: Pod pod-secrets-0ec23fc5-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:23:43.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7hnw6" for this suite.
Jul  1 12:23:49.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:23:50.049: INFO: namespace: e2e-tests-secrets-7hnw6, resource: bindings, ignored listing per whitelist
Jul  1 12:23:50.062: INFO: namespace e2e-tests-secrets-7hnw6 deletion completed in 6.095170542s

• [SLOW TEST:10.354 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
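Here the secret volume is consumed by a non-root container, with an explicit defaultMode on the volume and an fsGroup on the pod. A hedged sketch with placeholder names and IDs:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-fsgroup-demo   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # run the container as a non-root UID
    fsGroup: 1001                    # projected files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret        # any existing Secret with a data-1 key
      defaultMode: 0440              # octal; applied to every projected key, readable through the fsGroup
EOF
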
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:23:50.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul  1 12:23:50.158: INFO: Waiting up to 5m0s for pod "var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-var-expansion-r2k4h" to be "success or failure"
Jul  1 12:23:50.176: INFO: Pod "var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.504305ms
Jul  1 12:23:52.196: INFO: Pod "var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037830778s
Jul  1 12:23:54.201: INFO: Pod "var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042907184s
STEP: Saw pod success
Jul  1 12:23:54.201: INFO: Pod "var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:23:54.204: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006 container dapi-container: 
STEP: delete the pod
Jul  1 12:23:54.240: INFO: Waiting for pod var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:23:54.243: INFO: Pod var-expansion-14ea8bcc-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:23:54.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-r2k4h" for this suite.
Jul  1 12:24:00.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:24:00.295: INFO: namespace: e2e-tests-var-expansion-r2k4h, resource: bindings, ignored listing per whitelist
Jul  1 12:24:00.352: INFO: namespace e2e-tests-var-expansion-r2k4h deletion completed in 6.104823417s

• [SLOW TEST:10.290 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
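Env composition means referencing previously defined environment variables with $(VAR) inside another variable's value. A minimal pod that demonstrates it (all names and values are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo FOOBAR=$FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"        # $(VAR) expands vars declared earlier in this env list
EOF
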
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:24:00.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:24:00.518: INFO: (0) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.835586ms)
Jul  1 12:24:00.521: INFO: (1) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.567757ms)
Jul  1 12:24:00.524: INFO: (2) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.126698ms)
Jul  1 12:24:00.527: INFO: (3) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.763737ms)
Jul  1 12:24:00.532: INFO: (4) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.282951ms)
Jul  1 12:24:00.535: INFO: (5) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.129559ms)
Jul  1 12:24:00.540: INFO: (6) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.184966ms)
Jul  1 12:24:00.543: INFO: (7) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.738907ms)
Jul  1 12:24:00.547: INFO: (8) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.405265ms)
Jul  1 12:24:00.551: INFO: (9) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.906897ms)
Jul  1 12:24:00.558: INFO: (10) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.48547ms)
Jul  1 12:24:00.562: INFO: (11) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.386163ms)
Jul  1 12:24:00.564: INFO: (12) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.491809ms)
Jul  1 12:24:00.567: INFO: (13) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.527119ms)
Jul  1 12:24:00.569: INFO: (14) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.371194ms)
Jul  1 12:24:00.571: INFO: (15) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.365983ms)
Jul  1 12:24:00.574: INFO: (16) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.305252ms)
Jul  1 12:24:00.577: INFO: (17) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.606201ms)
Jul  1 12:24:00.628: INFO: (18) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 50.395832ms)
Jul  1 12:24:00.632: INFO: (19) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.43873ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:24:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-spcgs" for this suite.
Jul  1 12:24:06.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:24:06.755: INFO: namespace: e2e-tests-proxy-spcgs, resource: bindings, ignored listing per whitelist
Jul  1 12:24:06.806: INFO: namespace e2e-tests-proxy-spcgs deletion completed in 6.170489481s

• [SLOW TEST:6.454 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
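The twenty requests above all hit the node's "logs" proxy subresource. Outside the suite, the same endpoint can be queried directly; the node name below is the one from this cluster, adjust it for any other:

# Read the node "logs" proxy subresource through the apiserver:
kubectl get --raw "/api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/"
# Or via a local apiserver proxy, as a plain HTTP GET:
kubectl proxy --port=8080 &
curl http://127.0.0.1:8080/api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/
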
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:24:06.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:24:06.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-66bv8" to be "success or failure"
Jul  1 12:24:06.970: INFO: Pod "downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.203262ms
Jul  1 12:24:08.999: INFO: Pod "downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043557399s
Jul  1 12:24:11.004: INFO: Pod "downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048756544s
STEP: Saw pod success
Jul  1 12:24:11.004: INFO: Pod "downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:24:11.008: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:24:11.043: INFO: Waiting for pod downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:24:11.051: INFO: Pod downwardapi-volume-1ee55945-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:24:11.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-66bv8" for this suite.
Jul  1 12:24:17.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:24:17.134: INFO: namespace: e2e-tests-projected-66bv8, resource: bindings, ignored listing per whitelist
Jul  1 12:24:17.186: INFO: namespace e2e-tests-projected-66bv8 deletion completed in 6.131647267s

• [SLOW TEST:10.380 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
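Because the container sets no memory limit, the projected limits.memory file falls back to the node's allocatable memory, which is what the test verifies. An illustrative pod (names assumed) showing that projection:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-memlimit-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here, so the projected value defaults to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
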
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:24:17.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul  1 12:24:17.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-grjhx'
Jul  1 12:24:19.043: INFO: stderr: ""
Jul  1 12:24:19.043: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul  1 12:24:20.046: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:24:20.046: INFO: Found 0 / 1
Jul  1 12:24:21.069: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:24:21.070: INFO: Found 0 / 1
Jul  1 12:24:22.046: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:24:22.046: INFO: Found 1 / 1
Jul  1 12:24:22.046: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul  1 12:24:22.049: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:24:22.049: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  1 12:24:22.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-h89g7 --namespace=e2e-tests-kubectl-grjhx -p {"metadata":{"annotations":{"x":"y"}}}'
Jul  1 12:24:22.167: INFO: stderr: ""
Jul  1 12:24:22.167: INFO: stdout: "pod/redis-master-h89g7 patched\n"
STEP: checking annotations
Jul  1 12:24:22.185: INFO: Selector matched 1 pods for map[app:redis]
Jul  1 12:24:22.185: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:24:22.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-grjhx" for this suite.
Jul  1 12:24:44.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:24:44.322: INFO: namespace: e2e-tests-kubectl-grjhx, resource: bindings, ignored listing per whitelist
Jul  1 12:24:44.388: INFO: namespace e2e-tests-kubectl-grjhx deletion completed in 22.197753382s

• [SLOW TEST:27.201 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
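The test execs kubectl directly, so the JSON patch above appears unquoted; from an interactive shell the same patch, plus a check of the result, would look like this (pod name and namespace are the ones from this run):

kubectl patch pod redis-master-h89g7 --namespace=e2e-tests-kubectl-grjhx \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation was applied:
kubectl get pod redis-master-h89g7 --namespace=e2e-tests-kubectl-grjhx \
  -o jsonpath='{.metadata.annotations.x}'
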
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:24:44.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ggph8
Jul  1 12:24:48.577: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ggph8
STEP: checking the pod's current state and verifying that restartCount is present
Jul  1 12:24:48.581: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:28:49.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ggph8" for this suite.
Jul  1 12:28:55.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:28:55.993: INFO: namespace: e2e-tests-container-probe-ggph8, resource: bindings, ignored listing per whitelist
Jul  1 12:28:56.077: INFO: namespace e2e-tests-container-probe-ggph8 deletion completed in 6.142584006s

• [SLOW TEST:251.689 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
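The pod liveness-http carries an HTTP liveness probe that keeps succeeding, so its restartCount stays at 0 for the whole observation window. The suite's own /healthz test server is not shown in the log; the stand-in below uses nginx probed on / purely to illustrate the probe shape:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo           # illustrative stand-in, not the suite's liveness-http pod
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine   # any server that keeps answering 200 on the probed path
    livenessProbe:
      httpGet:
        path: /                      # the suite probes /healthz on its own test image
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 1
EOF
# While the probe succeeds, the kubelet never restarts the container:
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
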
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:28:56.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jul  1 12:28:56.163: INFO: Waiting up to 5m0s for pod "client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-containers-t2brv" to be "success or failure"
Jul  1 12:28:56.199: INFO: Pod "client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 35.738843ms
Jul  1 12:28:58.203: INFO: Pod "client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040001934s
Jul  1 12:29:00.208: INFO: Pod "client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044644031s
STEP: Saw pod success
Jul  1 12:29:00.208: INFO: Pod "client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:29:00.211: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:29:00.245: INFO: Waiting for pod client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:29:00.282: INFO: Pod client-containers-cb4f4813-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:29:00.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-t2brv" for this suite.
Jul  1 12:29:06.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:29:06.482: INFO: namespace: e2e-tests-containers-t2brv, resource: bindings, ignored listing per whitelist
Jul  1 12:29:06.485: INFO: namespace e2e-tests-containers-t2brv deletion completed in 6.153784911s

• [SLOW TEST:10.408 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
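The Docker Containers spec above verifies that setting spec.containers[].command replaces the image's ENTRYPOINT. A minimal sketch of such a pod, assuming a recent k8s.io/api; the busybox image and the echoed string are placeholders, and the pod is allowed to run to completion so its phase can reach the "success or failure" condition seen in the log.

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-command"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever, // let the pod terminate so its phase can become Succeeded
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",                                               // placeholder image
                Command: []string{"/bin/sh", "-c", "echo entrypoint overridden"}, // overrides the image ENTRYPOINT
            }},
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    _ = enc.Encode(pod) // pipe to: kubectl apply -f -
}
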
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:29:06.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d1919ac7-9bfb-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 12:29:06.675: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-hzvg7" to be "success or failure"
Jul  1 12:29:06.690: INFO: Pod "pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.347058ms
Jul  1 12:29:08.694: INFO: Pod "pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018723445s
Jul  1 12:29:10.700: INFO: Pod "pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02418881s
STEP: Saw pod success
Jul  1 12:29:10.700: INFO: Pod "pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:29:10.710: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 12:29:10.791: INFO: Waiting for pod pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:29:10.799: INFO: Pod pod-projected-configmaps-d1926113-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:29:10.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hzvg7" for this suite.
Jul  1 12:29:16.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:29:16.862: INFO: namespace: e2e-tests-projected-hzvg7, resource: bindings, ignored listing per whitelist
Jul  1 12:29:16.961: INFO: namespace e2e-tests-projected-hzvg7 deletion completed in 6.157618528s

• [SLOW TEST:10.476 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
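The projected-configMap spec above creates a ConfigMap, mounts it through a projected volume, and checks that the container can read the projected key before the pod completes. A minimal sketch of the pod side, assuming a recent k8s.io/api; the ConfigMap name "projected-cm" and key "data-1" are placeholders and must already exist in the namespace (for example: kubectl create configmap projected-cm --from-literal=data-1=value-1).

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"}, // placeholder name
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox", // placeholder image
                Command: []string{"cat", "/etc/projected-configmap-volume/data-1"}, // placeholder key path
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    _ = enc.Encode(pod) // pipe to: kubectl apply -f -
}
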
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:29:16.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  1 12:29:17.114: INFO: Waiting up to 5m0s for pod "pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-c6kxk" to be "success or failure"
Jul  1 12:29:17.126: INFO: Pod "pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.50278ms
Jul  1 12:29:19.135: INFO: Pod "pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021786529s
Jul  1 12:29:21.139: INFO: Pod "pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024931327s
STEP: Saw pod success
Jul  1 12:29:21.139: INFO: Pod "pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:29:21.141: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:29:21.175: INFO: Waiting for pod pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:29:21.180: INFO: Pod pod-d7c2e66b-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:29:21.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c6kxk" for this suite.
Jul  1 12:29:27.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:29:27.319: INFO: namespace: e2e-tests-emptydir-c6kxk, resource: bindings, ignored listing per whitelist
Jul  1 12:29:27.332: INFO: namespace e2e-tests-emptydir-c6kxk deletion completed in 6.108965244s

• [SLOW TEST:10.371 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
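The EmptyDir spec above ("non-root,0666,default") mounts an emptyDir backed by the node's default medium, runs the container as a non-root user, and checks that a file can be created and given 0666 permissions. A minimal sketch under those assumptions; the UID, mount path, and busybox image are placeholders.

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    runAsNonRoot := true
    uid := int64(1001) // placeholder non-root UID
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-non-root-0666-default"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser:    &uid,
                RunAsNonRoot: &runAsNonRoot,
            },
            Volumes: []corev1.Volume{{
                Name:         "test-volume",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // empty Medium = node's default storage
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox", // placeholder image
                Command:      []string{"sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
            }},
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    _ = enc.Encode(pod) // pipe to: kubectl apply -f -
}
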
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:29:27.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jul  1 12:29:27.412: INFO: Waiting up to 5m0s for pod "client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-containers-mxdwc" to be "success or failure"
Jul  1 12:29:27.420: INFO: Pod "client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.823259ms
Jul  1 12:29:29.425: INFO: Pod "client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012392571s
Jul  1 12:29:31.430: INFO: Pod "client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01801448s
STEP: Saw pod success
Jul  1 12:29:31.430: INFO: Pod "client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:29:31.434: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:29:31.519: INFO: Waiting for pod client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:29:31.530: INFO: Pod client-containers-ddedaeae-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:29:31.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mxdwc" for this suite.
Jul  1 12:29:37.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:29:37.690: INFO: namespace: e2e-tests-containers-mxdwc, resource: bindings, ignored listing per whitelist
Jul  1 12:29:37.701: INFO: namespace e2e-tests-containers-mxdwc deletion completed in 6.163397683s

• [SLOW TEST:10.368 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
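The second Docker Containers spec above leaves both command and args unset, so the container runs with the image's own ENTRYPOINT and CMD. A minimal sketch follows, reusing the nginx:1.14-alpine image that appears later in this log purely as an example; any image with a usable default entrypoint would do.

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-image-defaults"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/nginx:1.14-alpine",
                // Command and Args deliberately unset: the image's ENTRYPOINT and CMD apply unchanged.
            }},
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    _ = enc.Encode(pod) // pipe to: kubectl apply -f -
}
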
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:29:37.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-s97mh/configmap-test-e429ad3e-9bfb-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 12:29:37.867: INFO: Waiting up to 5m0s for pod "pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006" in namespace "e2e-tests-configmap-s97mh" to be "success or failure"
Jul  1 12:29:37.926: INFO: Pod "pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 58.394125ms
Jul  1 12:29:39.930: INFO: Pod "pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062278429s
Jul  1 12:29:41.934: INFO: Pod "pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066855964s
STEP: Saw pod success
Jul  1 12:29:41.934: INFO: Pod "pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:29:41.940: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006 container env-test: 
STEP: delete the pod
Jul  1 12:29:42.088: INFO: Waiting for pod pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006 to disappear
Jul  1 12:29:42.092: INFO: Pod pod-configmaps-e42a3666-9bfb-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:29:42.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s97mh" for this suite.
Jul  1 12:29:48.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:29:48.217: INFO: namespace: e2e-tests-configmap-s97mh, resource: bindings, ignored listing per whitelist
Jul  1 12:29:48.218: INFO: namespace e2e-tests-configmap-s97mh deletion completed in 6.122977076s

• [SLOW TEST:10.517 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
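The ConfigMap spec above injects a ConfigMap key into the container environment through valueFrom.configMapKeyRef and checks the value in the container's output. A minimal sketch, assuming a recent k8s.io/api; the ConfigMap "configmap-test" with key "data-1" is a placeholder and must exist first (for example: kubectl create configmap configmap-test --from-literal=data-1=value-1).

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox", // placeholder image
                Command: []string{"sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // placeholder name
                            Key:                  "data-1",                                            // placeholder key
                        },
                    },
                }},
            }},
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    _ = enc.Encode(pod) // pipe to: kubectl apply -f -
}
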
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:29:48.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:29:48.389: INFO: Creating deployment "nginx-deployment"
Jul  1 12:29:48.396: INFO: Waiting for observed generation 1
Jul  1 12:29:50.422: INFO: Waiting for all required pods to come up
Jul  1 12:29:50.427: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  1 12:30:02.440: INFO: Waiting for deployment "nginx-deployment" to complete
Jul  1 12:30:02.451: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul  1 12:30:02.465: INFO: Updating deployment nginx-deployment
Jul  1 12:30:02.465: INFO: Waiting for observed generation 2
Jul  1 12:30:04.477: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  1 12:30:04.480: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  1 12:30:04.483: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  1 12:30:04.495: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  1 12:30:04.495: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  1 12:30:04.500: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  1 12:30:04.506: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul  1 12:30:04.506: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul  1 12:30:04.519: INFO: Updating deployment nginx-deployment
Jul  1 12:30:04.519: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul  1 12:30:04.658: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  1 12:30:04.732: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
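The steps above capture proportional scaling: with the rollout stuck on a non-existent image, the deployment is scaled from 10 to 30, and the controller spreads the additional replicas across both ReplicaSets, giving the larger share to the larger ReplicaSet, while the total stays within maxSurge (the verified 20 + 13 = 33 equals spec.replicas 30 plus the maxSurge of 3 in the strategy dumped below). As an illustration only, here is a minimal Go sketch of a deployment with the same replica count and rolling-update bounds, assuming a recent k8s.io/api; names and labels are placeholders.

package main

import (
    "encoding/json"
    "os"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    replicas := int32(30)
    maxSurge := intstr.FromInt(3)
    maxUnavailable := intstr.FromInt(2)
    labels := map[string]string{"name": "nginx"}
    dep := &appsv1.Deployment{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
        ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Labels: labels},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxSurge:       &maxSurge,       // total replicas across ReplicaSets may reach 30 + 3 = 33
                    MaxUnavailable: &maxUnavailable, // at most 2 below the desired count during a rollout
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    _ = enc.Encode(dep) // pipe to: kubectl apply -f -
}
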
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  1 12:30:05.405: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wcfkv/deployments/nginx-deployment,UID:ea7031dd-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856640,Generation:3,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-07-01 12:30:03 +0000 UTC 2019-07-01 12:29:48 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-65bbdb5f8" is progressing.} {Available False 2019-07-01 12:30:04 +0000 UTC 2019-07-01 12:30:04 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul  1 12:30:05.426: INFO: New ReplicaSet "nginx-deployment-65bbdb5f8" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8,GenerateName:,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wcfkv/replicasets/nginx-deployment-65bbdb5f8,UID:f2d3472f-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856680,Generation:3,CreationTimestamp:2019-07-01 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ea7031dd-9bfb-11e9-a678-fa163e0cec1d 0xc0026e3d27 0xc0026e3d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 12:30:05.426: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul  1 12:30:05.426: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965,GenerateName:,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wcfkv/replicasets/nginx-deployment-555b55d965,UID:ea74fe13-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856678,Generation:3,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ea7031dd-9bfb-11e9-a678-fa163e0cec1d 0xc0026e3c37 0xc0026e3c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul  1 12:30:05.619: INFO: Pod "nginx-deployment-555b55d965-2x2wh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-2x2wh,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-2x2wh,UID:ea83495e-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856541,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026da897 0xc0026da898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026da900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026da920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.4,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://201174829bfd226a67c54a5404ea3621921e9f9f1123414fb8adf8b75adf1cf1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.619: INFO: Pod "nginx-deployment-555b55d965-5zht4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-5zht4,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-5zht4,UID:f42ed8a1-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856659,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026da9e7 0xc0026da9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026daa50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026daa70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.619: INFO: Pod "nginx-deployment-555b55d965-7r44v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-7r44v,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-7r44v,UID:f423dd4b-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856649,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026daae7 0xc0026daae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dab50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dab70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.619: INFO: Pod "nginx-deployment-555b55d965-82pgg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-82pgg,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-82pgg,UID:f4586423-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856668,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dabe7 0xc0026dabe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dac50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dac70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.619: INFO: Pod "nginx-deployment-555b55d965-9bjcd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-9bjcd,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-9bjcd,UID:f4596ba8-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856681,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dace7 0xc0026dace8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dad50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dad70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.620: INFO: Pod "nginx-deployment-555b55d965-9c2cc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-9c2cc,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-9c2cc,UID:f423422a-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856644,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dade7 0xc0026dade8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dae50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dae70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.620: INFO: Pod "nginx-deployment-555b55d965-9rtsm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-9rtsm,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-9rtsm,UID:ea858771-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856571,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026daee7 0xc0026daee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026daf50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026daf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.5,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ba456a281b792a006da34849c33adf14451fd130e36f430e68247886618c2375}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.620: INFO: Pod "nginx-deployment-555b55d965-bpjhj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-bpjhj,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-bpjhj,UID:f45a5bc8-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856679,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db037 0xc0026db038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db0a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.620: INFO: Pod "nginx-deployment-555b55d965-d286n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-d286n,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-d286n,UID:f42eb32d-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856652,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db137 0xc0026db138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db1a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.620: INFO: Pod "nginx-deployment-555b55d965-d5bnm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-d5bnm,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-d5bnm,UID:f4593bfb-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856674,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db237 0xc0026db238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db2a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.621: INFO: Pod "nginx-deployment-555b55d965-f54xg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-f54xg,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-f54xg,UID:ea8b835d-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856552,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db337 0xc0026db338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db3a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.11,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://49da7d3f9800686704d1b6e7c06e68ac0e7efaed507cea0ce9f440cfd71e05e2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.621: INFO: Pod "nginx-deployment-555b55d965-g5lqk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-g5lqk,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-g5lqk,UID:ea8ba646-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856556,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db487 0xc0026db488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.10,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ddf1c85dd304f9757538afa741d8a4eabe2ccb76d6d2d31c230b40142627271a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.621: INFO: Pod "nginx-deployment-555b55d965-h4x2m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-h4x2m,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-h4x2m,UID:f42ef475-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856664,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db5e7 0xc0026db5e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.621: INFO: Pod "nginx-deployment-555b55d965-ljtmz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-ljtmz,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-ljtmz,UID:f42f0400-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856655,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db6e7 0xc0026db6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.621: INFO: Pod "nginx-deployment-555b55d965-m99kq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-m99kq,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-m99kq,UID:ea80b942-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856527,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db7e7 0xc0026db7e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.6,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://84153ebc75a3613aa50bd420ac07f7071873c67a988a45c6ab88ff84f703bff6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.621: INFO: Pod "nginx-deployment-555b55d965-mb9xr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-mb9xr,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-mb9xr,UID:ea85bcca-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856544,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026db937 0xc0026db938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026db9a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026db9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.9,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6701ab6b422f7f9db721c2700fddd2593678d650f9c43503d9d8e86ec24666a6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.622: INFO: Pod "nginx-deployment-555b55d965-mqlz2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-mqlz2,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-mqlz2,UID:ea85f8d3-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856547,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dba87 0xc0026dba88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dbaf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dbb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.8,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b04e422e4c0998697ff6aee6e0e3cc1f93e1077348e56e5063ccd38303e56446}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.622: INFO: Pod "nginx-deployment-555b55d965-tfnzg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-tfnzg,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-tfnzg,UID:f459ad38-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856675,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dbbd7 0xc0026dbbd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dbc40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dbc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.622: INFO: Pod "nginx-deployment-555b55d965-vdbmz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-vdbmz,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-vdbmz,UID:ea8b3893-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856558,Generation:0,CreationTimestamp:2019-07-01 12:29:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dbcd7 0xc0026dbcd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dbd40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dbd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:29:48 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.12,StartTime:2019-07-01 12:29:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-07-01 12:29:58 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9f218eb468ad393a3de527150f4f25ad53e4006d1782b898177d4d93d606b2dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.622: INFO: Pod "nginx-deployment-555b55d965-x7mqr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-x7mqr,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-555b55d965-x7mqr,UID:f40ee874-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856632,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 ea74fe13-9bfb-11e9-a678-fa163e0cec1d 0xc0026dbe27 0xc0026dbe28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dbe90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dbeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.622: INFO: Pod "nginx-deployment-65bbdb5f8-49sch" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-49sch,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-49sch,UID:f459fd84-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856673,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0026dbf27 0xc0026dbf28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dbf90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dbfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.623: INFO: Pod "nginx-deployment-65bbdb5f8-4bp7v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-4bp7v,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-4bp7v,UID:f2d47eae-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856625,Generation:0,CreationTimestamp:2019-07-01 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc002362027 0xc002362028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023620b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-07-01 12:30:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.623: INFO: Pod "nginx-deployment-65bbdb5f8-5gbqm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-5gbqm,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-5gbqm,UID:f423c14d-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856643,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc002362177 0xc002362178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023621e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.623: INFO: Pod "nginx-deployment-65bbdb5f8-cgqz6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-cgqz6,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-cgqz6,UID:f2d6376c-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856586,Generation:0,CreationTimestamp:2019-07-01 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc002362277 0xc002362278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023622e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.623: INFO: Pod "nginx-deployment-65bbdb5f8-jb4d9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-jb4d9,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-jb4d9,UID:f459d65e-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856672,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0023623d7 0xc0023623d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.623: INFO: Pod "nginx-deployment-65bbdb5f8-kvrrz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-kvrrz,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-kvrrz,UID:f45a00e1-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856676,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0023624d7 0xc0023624d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-nsh54" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-nsh54,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-nsh54,UID:f42f0901-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856660,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0023625d7 0xc0023625d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-p5m75" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-p5m75,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-p5m75,UID:f308c6a3-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856605,Generation:0,CreationTimestamp:2019-07-01 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0023626d7 0xc0023626d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-pt5h7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-pt5h7,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-pt5h7,UID:f30df892-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856610,Generation:0,CreationTimestamp:2019-07-01 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0023627d7 0xc0023627d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-q6lnd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-q6lnd,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-q6lnd,UID:f2d5fd1f-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856671,Generation:0,CreationTimestamp:2019-07-01 12:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc0023628d7 0xc0023628d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:02 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-07-01 12:30:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-rt6np" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-rt6np,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-rt6np,UID:f459f095-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856677,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc002362a27 0xc002362a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-v2bzl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-v2bzl,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-v2bzl,UID:f42eef41-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856653,Generation:0,CreationTimestamp:2019-07-01 12:30:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc002362b27 0xc002362b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  1 12:30:05.624: INFO: Pod "nginx-deployment-65bbdb5f8-v2j2s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-v2j2s,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-wcfkv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wcfkv/pods/nginx-deployment-65bbdb5f8-v2j2s,UID:f4638446-9bfb-11e9-a678-fa163e0cec1d,ResourceVersion:1856682,Generation:0,CreationTimestamp:2019-07-01 12:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 f2d3472f-9bfb-11e9-a678-fa163e0cec1d 0xc002362c27 0xc002362c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ldxkg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ldxkg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ldxkg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002362c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002362cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:30:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
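Reading aid for the dump above: it covers pods from two ReplicaSets of the same Deployment, pod-template-hash 555b55d965 (image docker.io/library/nginx:1.14-alpine, mostly Running and Ready) and 65bbdb5f8 (image nginx:404, all still Pending). The following is only an illustrative Python sketch of how one might regroup such a listing by pod-template-hash and count ready versus pending pods; it assumes the official kubernetes Python client and cluster access, and reuses the namespace and label selector that appear in this log.

# Illustrative sketch: regroup a pod listing like the one above by
# ReplicaSet (pod-template-hash) and count Ready vs. not-Ready pods.
# Assumes the official kubernetes Python client and a reachable cluster.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()                         # or config.load_incluster_config()
core = client.CoreV1Api()

ns = "e2e-tests-deployment-wcfkv"                 # namespace from this log
pods = core.list_namespaced_pod(ns, label_selector="name=nginx").items

counts = defaultdict(lambda: {"ready": 0, "pending": 0})
for pod in pods:
    rs_hash = (pod.metadata.labels or {}).get("pod-template-hash", "<none>")
    ready = any(c.type == "Ready" and c.status == "True"
                for c in (pod.status.conditions or []))
    counts[rs_hash]["ready" if ready else "pending"] += 1

for rs_hash, c in sorted(counts.items()):
    print(f"pod-template-hash={rs_hash}: {c['ready']} ready, {c['pending']} pending")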
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:30:05.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wcfkv" for this suite.
Jul  1 12:30:25.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:30:25.980: INFO: namespace: e2e-tests-deployment-wcfkv, resource: bindings, ignored listing per whitelist
Jul  1 12:30:26.047: INFO: namespace e2e-tests-deployment-wcfkv deletion completed in 20.396455791s

• [SLOW TEST:37.828 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
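The proportional scaling spec that just finished checks that, when a Deployment with an in-progress (here, image-blocked) rollout is rescaled, the controller spreads the new replica count across its ReplicaSets in proportion to their current sizes. The Python sketch below is a simplified illustration of that idea only, with integer proportions and any rounding leftover handed to the largest ReplicaSet; it is not the Deployment controller's exact rounding logic, and the sizes in the example are illustrative rather than the numbers used by this spec.

# Simplified illustration of proportional scaling: distribute a new total
# replica count across existing ReplicaSets in proportion to their current
# sizes. Mirrors the idea the spec verifies, not the controller's algorithm.
def scale_proportionally(current_sizes, new_total):
    old_total = sum(current_sizes.values())
    if old_total == 0:
        return dict.fromkeys(current_sizes, 0)
    scaled = {name: size * new_total // old_total
              for name, size in current_sizes.items()}
    # Hand any integer-rounding leftover to the largest ReplicaSet.
    leftover = new_total - sum(scaled.values())
    largest = max(current_sizes, key=current_sizes.get)
    scaled[largest] += leftover
    return scaled

# Illustrative numbers: an old ReplicaSet with 10 pods and a blocked new one
# with 3, rescaled to 30 replicas mid-rollout.
print(scale_proportionally({"old-replicaset": 10, "new-replicaset": 3}, 30))
# -> {'old-replicaset': 24, 'new-replicaset': 6}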
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:30:26.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul  1 12:30:27.060: INFO: Waiting up to 5m0s for pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l" in namespace "e2e-tests-svcaccounts-n864r" to be "success or failure"
Jul  1 12:30:27.151: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l": Phase="Pending", Reason="", readiness=false. Elapsed: 90.968161ms
Jul  1 12:30:30.043: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983238428s
Jul  1 12:30:32.047: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.987120404s
Jul  1 12:30:34.051: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.990916235s
Jul  1 12:30:36.055: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.995440257s
STEP: Saw pod success
Jul  1 12:30:36.055: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l" satisfied condition "success or failure"
Jul  1 12:30:36.059: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l container token-test: 
STEP: delete the pod
Jul  1 12:30:36.116: INFO: Waiting for pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l to disappear
Jul  1 12:30:36.123: INFO: Pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-99t4l no longer exists
STEP: Creating a pod to test consume service account root CA
Jul  1 12:30:36.129: INFO: Waiting up to 5m0s for pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr" in namespace "e2e-tests-svcaccounts-n864r" to be "success or failure"
Jul  1 12:30:36.136: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752011ms
Jul  1 12:30:38.139: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00984569s
Jul  1 12:30:40.142: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0126343s
Jul  1 12:30:42.146: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016867802s
STEP: Saw pod success
Jul  1 12:30:42.146: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr" satisfied condition "success or failure"
Jul  1 12:30:42.149: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr container root-ca-test: 
STEP: delete the pod
Jul  1 12:30:42.187: INFO: Waiting for pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr to disappear
Jul  1 12:30:42.195: INFO: Pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-977nr no longer exists
STEP: Creating a pod to test consume service account namespace
Jul  1 12:30:42.200: INFO: Waiting up to 5m0s for pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4" in namespace "e2e-tests-svcaccounts-n864r" to be "success or failure"
Jul  1 12:30:42.270: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4": Phase="Pending", Reason="", readiness=false. Elapsed: 69.703819ms
Jul  1 12:30:44.279: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078740974s
Jul  1 12:30:46.351: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150487348s
STEP: Saw pod success
Jul  1 12:30:46.351: INFO: Pod "pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4" satisfied condition "success or failure"
Jul  1 12:30:46.354: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4 container namespace-test: 
STEP: delete the pod
Jul  1 12:30:46.434: INFO: Waiting for pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4 to disappear
Jul  1 12:30:46.498: INFO: Pod pod-service-account-017cf854-9bfc-11e9-9f49-0242ac110006-d2nm4 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:30:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-n864r" for this suite.
Jul  1 12:30:52.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:30:52.622: INFO: namespace: e2e-tests-svcaccounts-n864r, resource: bindings, ignored listing per whitelist
Jul  1 12:30:52.669: INFO: namespace e2e-tests-svcaccounts-n864r deletion completed in 6.164712632s

• [SLOW TEST:26.623 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
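Editor's note: a quick manual version of the same check works with any long-running pod; the pod name and image below are illustrative. The default ServiceAccount token, root CA, and namespace are projected into every container at a well-known path:

# Start a throwaway pod and list the auto-mounted service-account files.
kubectl run sa-check --restart=Never --image=busybox --command -- sleep 3600
kubectl exec sa-check -- ls /var/run/secrets/kubernetes.io/serviceaccount    # ca.crt  namespace  token
kubectl exec sa-check -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
kubectl delete pod sa-check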
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:30:52.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-10d81926-9bfc-11e9-9f49-0242ac110006
STEP: Creating a pod to test consume configMaps
Jul  1 12:30:52.847: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-xcdt8" to be "success or failure"
Jul  1 12:30:52.858: INFO: Pod "pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.566867ms
Jul  1 12:30:54.874: INFO: Pod "pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027486128s
Jul  1 12:30:56.879: INFO: Pod "pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032462402s
STEP: Saw pod success
Jul  1 12:30:56.880: INFO: Pod "pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:30:56.884: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  1 12:30:56.932: INFO: Waiting for pod pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006 to disappear
Jul  1 12:30:56.943: INFO: Pod pod-projected-configmaps-10d90ed9-9bfc-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:30:56.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xcdt8" for this suite.
Jul  1 12:31:02.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:31:03.067: INFO: namespace: e2e-tests-projected-xcdt8, resource: bindings, ignored listing per whitelist
Jul  1 12:31:03.118: INFO: namespace e2e-tests-projected-xcdt8 deletion completed in 6.143123479s

• [SLOW TEST:10.448 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
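Editor's note: the manifest below is a minimal hand-written analogue of what this spec exercises (all names are illustrative): a projected ConfigMap volume whose files are created with an explicit defaultMode, which the container can verify with ls -l:

kubectl create configmap demo-config --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400        # files should show up as -r--------
      sources:
      - configMap:
          name: demo-config
EOF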
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:31:03.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul  1 12:31:07.263: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-170735af-9bfc-11e9-9f49-0242ac110006,GenerateName:,Namespace:e2e-tests-events-7b7vj,SelfLink:/api/v1/namespaces/e2e-tests-events-7b7vj/pods/send-events-170735af-9bfc-11e9-9f49-0242ac110006,UID:170d07f1-9bfc-11e9-a678-fa163e0cec1d,ResourceVersion:1857061,Generation:0,CreationTimestamp:2019-07-01 12:31:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 192320006,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-85jsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-85jsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-85jsd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022bb630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022bb710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:31:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:31:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:31:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:31:03 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.4,StartTime:2019-07-01 12:31:03 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-07-01 12:31:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://4c974ac8c02c962cd85e2238c484cf03cd908c72ef841a3bf3daceec683c5110}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jul  1 12:31:09.271: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul  1 12:31:11.280: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:31:11.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-7b7vj" for this suite.
Jul  1 12:31:49.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:31:49.428: INFO: namespace: e2e-tests-events-7b7vj, resource: bindings, ignored listing per whitelist
Jul  1 12:31:49.444: INFO: namespace e2e-tests-events-7b7vj deletion completed in 38.126789167s

• [SLOW TEST:46.326 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
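Editor's note: the same scheduler and kubelet events can be inspected directly with a field selector; the pod name here is illustrative and the image is the one the suite itself uses:

kubectl run event-demo --restart=Never --image=gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
# Expect a Scheduled event from default-scheduler plus Pulled/Created/Started events from the kubelet.
kubectl get events --field-selector involvedObject.name=event-demo
kubectl delete pod event-demo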
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:31:49.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-32a98483-9bfc-11e9-9f49-0242ac110006
STEP: Creating secret with name s-test-opt-upd-32a984d3-9bfc-11e9-9f49-0242ac110006
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-32a98483-9bfc-11e9-9f49-0242ac110006
STEP: Updating secret s-test-opt-upd-32a984d3-9bfc-11e9-9f49-0242ac110006
STEP: Creating secret with name s-test-opt-create-32a984f1-9bfc-11e9-9f49-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:33:08.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x7pq7" for this suite.
Jul  1 12:33:30.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:33:30.759: INFO: namespace: e2e-tests-secrets-x7pq7, resource: bindings, ignored listing per whitelist
Jul  1 12:33:30.766: INFO: namespace e2e-tests-secrets-x7pq7 deletion completed in 22.074868597s

• [SLOW TEST:101.321 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
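Editor's note: a hand-rolled version of the optional-Secret behaviour might look like the sketch below (names illustrative). The volume marks the Secret as optional, so the pod starts even before the Secret exists, and the kubelet projects the data into the mounted directory once it is created or updated:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-vol; sleep 5; done"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret-vol
  volumes:
  - name: sec
    secret:
      secretName: s-test-opt-create
      optional: true           # pod may start before the Secret exists
EOF
# Creating the Secret afterwards should make its keys appear in /etc/secret-vol
# within the kubelet sync period.
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
kubectl logs -f optional-secret-demo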
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:33:30.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:33:30.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006" in namespace "e2e-tests-projected-cqzq9" to be "success or failure"
Jul  1 12:33:30.885: INFO: Pod "downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365506ms
Jul  1 12:33:32.891: INFO: Pod "downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010462962s
Jul  1 12:33:34.896: INFO: Pod "downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015142855s
STEP: Saw pod success
Jul  1 12:33:34.896: INFO: Pod "downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:33:34.900: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:33:34.956: INFO: Waiting for pod downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006 to disappear
Jul  1 12:33:34.962: INFO: Pod downwardapi-volume-6f0dccc9-9bfc-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:33:34.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cqzq9" for this suite.
Jul  1 12:33:40.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:33:41.063: INFO: namespace: e2e-tests-projected-cqzq9, resource: bindings, ignored listing per whitelist
Jul  1 12:33:41.075: INFO: namespace e2e-tests-projected-cqzq9 deletion completed in 6.106552833s

• [SLOW TEST:10.310 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
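Editor's note: this mirrors the projected ConfigMap example earlier, but with a downwardAPI source; the names are again illustrative. Setting defaultMode on the projected volume controls the mode of the generated files:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF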
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:33:41.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  1 12:33:41.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4g8k2'
Jul  1 12:33:41.392: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  1 12:33:41.392: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul  1 12:33:41.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-4g8k2'
Jul  1 12:33:41.556: INFO: stderr: ""
Jul  1 12:33:41.556: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:33:41.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4g8k2" for this suite.
Jul  1 12:34:03.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:34:03.768: INFO: namespace: e2e-tests-kubectl-4g8k2, resource: bindings, ignored listing per whitelist
Jul  1 12:34:03.827: INFO: namespace e2e-tests-kubectl-4g8k2 deletion completed in 22.266343522s

• [SLOW TEST:22.751 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
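Editor's note: one way to confirm by hand that the generated Job's pod template really uses restartPolicy: OnFailure (which is what the --restart flag controls). The first command reuses the exact invocation logged above, so the deprecation warning is expected:

kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl get job e2e-test-nginx-job -o jsonpath='{.spec.template.spec.restartPolicy}'    # OnFailure
kubectl delete job e2e-test-nginx-job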
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:34:03.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:34:04.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qnjqc" for this suite.
Jul  1 12:34:10.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:34:10.137: INFO: namespace: e2e-tests-kubelet-test-qnjqc, resource: bindings, ignored listing per whitelist
Jul  1 12:34:10.150: INFO: namespace e2e-tests-kubelet-test-qnjqc deletion completed in 6.113406354s

• [SLOW TEST:6.322 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
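Editor's note: a rough manual analogue (names illustrative, and simplified relative to the suite's always-restarting container): start a pod whose only container exits with an error, then confirm it can still be deleted cleanly:

kubectl run always-fails --restart=Never --image=busybox --command -- /bin/false
kubectl get pod always-fails                         # ends up in Error/Failed state
kubectl delete pod always-fails --grace-period=0 --force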
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:34:10.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  1 12:34:10.306: INFO: Waiting up to 5m0s for pod "pod-868c7802-9bfc-11e9-9f49-0242ac110006" in namespace "e2e-tests-emptydir-k468v" to be "success or failure"
Jul  1 12:34:10.310: INFO: Pod "pod-868c7802-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.480381ms
Jul  1 12:34:12.356: INFO: Pod "pod-868c7802-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049289119s
Jul  1 12:34:14.361: INFO: Pod "pod-868c7802-9bfc-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054451249s
STEP: Saw pod success
Jul  1 12:34:14.361: INFO: Pod "pod-868c7802-9bfc-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:34:14.364: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-868c7802-9bfc-11e9-9f49-0242ac110006 container test-container: 
STEP: delete the pod
Jul  1 12:34:14.406: INFO: Waiting for pod pod-868c7802-9bfc-11e9-9f49-0242ac110006 to disappear
Jul  1 12:34:14.410: INFO: Pod pod-868c7802-9bfc-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:34:14.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k468v" for this suite.
Jul  1 12:34:20.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:34:20.502: INFO: namespace: e2e-tests-emptydir-k468v, resource: bindings, ignored listing per whitelist
Jul  1 12:34:20.509: INFO: namespace e2e-tests-emptydir-k468v deletion completed in 6.095262193s

• [SLOW TEST:10.359 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
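Editor's note: the (non-root,0644,tmpfs) triple in the spec name roughly maps onto: run as a non-root UID, expect 0644 file modes, and back the emptyDir with memory. A hand-written equivalent could look like this (UID and names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -ln /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo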
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:34:20.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  1 12:34:20.680: INFO: Creating deployment "test-recreate-deployment"
Jul  1 12:34:20.700: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul  1 12:34:20.754: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jul  1 12:34:22.763: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul  1 12:34:22.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697581260, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697581260, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697581260, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697581260, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5dfdcc846d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  1 12:34:24.773: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  1 12:34:24.788: INFO: Updating deployment test-recreate-deployment
Jul  1 12:34:24.788: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  1 12:34:25.266: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-j26lq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j26lq/deployments/test-recreate-deployment,UID:8cbc5e24-9bfc-11e9-a678-fa163e0cec1d,ResourceVersion:1857488,Generation:2,CreationTimestamp:2019-07-01 12:34:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-07-01 12:34:25 +0000 UTC 2019-07-01 12:34:25 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-07-01 12:34:25 +0000 UTC 2019-07-01 12:34:20 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-697fbf54bf" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul  1 12:34:25.272: INFO: New ReplicaSet "test-recreate-deployment-697fbf54bf" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf,GenerateName:,Namespace:e2e-tests-deployment-j26lq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j26lq/replicasets/test-recreate-deployment-697fbf54bf,UID:8f44460e-9bfc-11e9-a678-fa163e0cec1d,ResourceVersion:1857483,Generation:1,CreationTimestamp:2019-07-01 12:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8cbc5e24-9bfc-11e9-a678-fa163e0cec1d 0xc001cc9c87 0xc001cc9c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 12:34:25.272: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  1 12:34:25.272: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5dfdcc846d,GenerateName:,Namespace:e2e-tests-deployment-j26lq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j26lq/replicasets/test-recreate-deployment-5dfdcc846d,UID:8cc78b2e-9bfc-11e9-a678-fa163e0cec1d,ResourceVersion:1857476,Generation:2,CreationTimestamp:2019-07-01 12:34:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8cbc5e24-9bfc-11e9-a678-fa163e0cec1d 0xc001cc9bc7 0xc001cc9bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  1 12:34:25.288: INFO: Pod "test-recreate-deployment-697fbf54bf-gj5fr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf-gj5fr,GenerateName:test-recreate-deployment-697fbf54bf-,Namespace:e2e-tests-deployment-j26lq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-j26lq/pods/test-recreate-deployment-697fbf54bf-gj5fr,UID:8f48d282-9bfc-11e9-a678-fa163e0cec1d,ResourceVersion:1857487,Generation:0,CreationTimestamp:2019-07-01 12:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-697fbf54bf 8f44460e-9bfc-11e9-a678-fa163e0cec1d 0xc001ce4ab7 0xc001ce4ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rrlbr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rrlbr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rrlbr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ce4b20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ce4b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:34:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:34:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:34:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-07-01 12:34:25 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-07-01 12:34:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:34:25.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-j26lq" for this suite.
Jul  1 12:34:31.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:34:31.326: INFO: namespace: e2e-tests-deployment-j26lq, resource: bindings, ignored listing per whitelist
Jul  1 12:34:31.403: INFO: namespace e2e-tests-deployment-j26lq deletion completed in 6.110442682s

• [SLOW TEST:10.893 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
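Editor's note: for comparison outside the framework, the Recreate strategy can be observed with a small Deployment (names and images illustrative). On an image change, all old pods are terminated before any new pod is created, unlike the default RollingUpdate:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 2
  strategy:
    type: Recreate               # delete old pods first, then create new ones
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl set image deployment/recreate-demo nginx=docker.io/library/nginx:1.15-alpine
kubectl get pods -l app=recreate-demo -w     # old pods reach Terminating before new ones appear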
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:34:31.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-rdqk
STEP: Creating a pod to test atomic-volume-subpath
Jul  1 12:34:31.987: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rdqk" in namespace "e2e-tests-subpath-q2vrn" to be "success or failure"
Jul  1 12:34:32.001: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.320818ms
Jul  1 12:34:34.010: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023076498s
Jul  1 12:34:36.016: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028487737s
Jul  1 12:34:38.027: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 6.040141315s
Jul  1 12:34:40.033: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 8.045703347s
Jul  1 12:34:42.037: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 10.049196313s
Jul  1 12:34:44.040: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 12.05297697s
Jul  1 12:34:46.047: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 14.059304479s
Jul  1 12:34:48.052: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 16.065024638s
Jul  1 12:34:50.056: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 18.06833163s
Jul  1 12:34:52.061: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 20.073209305s
Jul  1 12:34:54.066: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 22.079146234s
Jul  1 12:34:56.075: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Running", Reason="", readiness=false. Elapsed: 24.087843658s
Jul  1 12:34:58.081: INFO: Pod "pod-subpath-test-downwardapi-rdqk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.093712057s
STEP: Saw pod success
Jul  1 12:34:58.081: INFO: Pod "pod-subpath-test-downwardapi-rdqk" satisfied condition "success or failure"
Jul  1 12:34:58.085: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-downwardapi-rdqk container test-container-subpath-downwardapi-rdqk: 
STEP: delete the pod
Jul  1 12:34:58.178: INFO: Waiting for pod pod-subpath-test-downwardapi-rdqk to disappear
Jul  1 12:34:58.187: INFO: Pod pod-subpath-test-downwardapi-rdqk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rdqk
Jul  1 12:34:58.187: INFO: Deleting pod "pod-subpath-test-downwardapi-rdqk" in namespace "e2e-tests-subpath-q2vrn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:34:58.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-q2vrn" for this suite.
Jul  1 12:35:04.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:35:04.321: INFO: namespace: e2e-tests-subpath-q2vrn, resource: bindings, ignored listing per whitelist
Jul  1 12:35:04.382: INFO: namespace e2e-tests-subpath-q2vrn deletion completed in 6.188759874s

• [SLOW TEST:32.979 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
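Editor's note: the "subpaths with downward pod" case combines a downwardAPI volume with a subPath mount, so only a single generated file is exposed at the mount point. A minimal illustrative manifest (not the suite's exact pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/whoami"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/whoami
      subPath: podname           # mount just one file from the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downward-demo           # prints the pod's own name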
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  1 12:35:04.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  1 12:35:04.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006" in namespace "e2e-tests-downward-api-ffkrp" to be "success or failure"
Jul  1 12:35:04.519: INFO: Pod "downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197452ms
Jul  1 12:35:07.135: INFO: Pod "downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.622251682s
Jul  1 12:35:09.140: INFO: Pod "downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.627811008s
STEP: Saw pod success
Jul  1 12:35:09.141: INFO: Pod "downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006" satisfied condition "success or failure"
Jul  1 12:35:09.144: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006 container client-container: 
STEP: delete the pod
Jul  1 12:35:09.194: INFO: Waiting for pod downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006 to disappear
Jul  1 12:35:09.198: INFO: Pod downwardapi-volume-a6d68ca8-9bfc-11e9-9f49-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  1 12:35:09.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ffkrp" for this suite.
Jul  1 12:35:15.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  1 12:35:15.437: INFO: namespace: e2e-tests-downward-api-ffkrp, resource: bindings, ignored listing per whitelist
Jul  1 12:35:15.465: INFO: namespace e2e-tests-downward-api-ffkrp deletion completed in 6.261980779s

• [SLOW TEST:11.083 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
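Editor's note: to expose a container's memory request through the downward API by hand (names and values illustrative), the volume item uses a resourceFieldRef; the divisor controls the unit in which the value is written:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi           # file will contain "32"
EOF
kubectl logs downward-mem-demo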
SSSSSSSSSS
Jul  1 12:35:15.465: INFO: Running AfterSuite actions on all nodes
Jul  1 12:35:15.465: INFO: Running AfterSuite actions on node 1
Jul  1 12:35:15.465: INFO: Skipping dumping logs from cluster

Ran 200 of 2162 Specs in 6508.157 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1962 Skipped
PASS