I0104 11:30:20.319353 8 e2e.go:224] Starting e2e run "96679f59-2ee5-11ea-9996-0242ac110006" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578137419 - Will randomize all specs
Will run 201 of 2164 specs
Jan 4 11:30:20.903: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 11:30:20.919: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 4 11:30:20.946: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 4 11:30:21.570: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 4 11:30:21.570: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 4 11:30:21.570: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 4 11:30:21.587: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 4 11:30:21.587: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 4 11:30:21.587: INFO: e2e test version: v1.13.12
Jan 4 11:30:21.592: INFO: kube-apiserver version: v1.13.8
SSS
------------------------------
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:30:21.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Jan 4 11:30:22.092: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0104 11:31:03.496068 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 4 11:31:03.496: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:31:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-86g9t" for this suite.
Jan 4 11:31:12.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:31:14.746: INFO: namespace: e2e-tests-gc-86g9t, resource: bindings, ignored listing per whitelist
Jan 4 11:31:14.746: INFO: namespace e2e-tests-gc-86g9t deletion completed in 11.240483494s
• [SLOW TEST:53.153 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:31:14.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b7dc4ce9-2ee5-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 4 11:31:15.612: INFO: Waiting up to 5m0s for pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-z59jd" to be "success or failure"
Jan 4 11:31:15.735: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 123.081629ms
Jan 4 11:31:18.012: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399945033s
Jan 4 11:31:20.087: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475506861s
Jan 4 11:31:22.103: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490764135s
Jan 4 11:31:24.116: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50389825s
Jan 4 11:31:26.134: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.522133629s
Jan 4 11:31:28.156: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.54433298s
Jan 4 11:31:30.784: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172383889s
Jan 4 11:31:32.811: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.19913553s
Jan 4 11:31:34.840: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.227744736s
Jan 4 11:31:36.861: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.249654617s
STEP: Saw pod success
Jan 4 11:31:36.862: INFO: Pod "pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:31:36.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006 container secret-volume-test:
STEP: delete the pod
Jan 4 11:31:36.973: INFO: Waiting for pod pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006 to disappear
Jan 4 11:31:36.985: INFO: Pod pod-secrets-b7dd77fb-2ee5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:31:36.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z59jd" for this suite.
Jan 4 11:31:43.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:31:43.236: INFO: namespace: e2e-tests-secrets-z59jd, resource: bindings, ignored listing per whitelist
Jan 4 11:31:43.372: INFO: namespace e2e-tests-secrets-z59jd deletion completed in 6.37792343s
• [SLOW TEST:28.626 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:31:43.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 4 11:31:43.535: INFO: Waiting up to 5m0s for pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-2nsjc" to be "success or failure"
Jan 4 11:31:43.541: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250043ms
Jan 4 11:31:45.709: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174404744s
Jan 4 11:31:47.735: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200350229s
Jan 4 11:31:50.926: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.390872621s
Jan 4 11:31:52.936: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.401632443s
Jan 4 11:31:55.305: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.770673409s
Jan 4 11:31:57.326: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.791378176s
STEP: Saw pod success
Jan 4 11:31:57.326: INFO: Pod "pod-c88c0946-2ee5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:31:57.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c88c0946-2ee5-11ea-9996-0242ac110006 container test-container:
STEP: delete the pod
Jan 4 11:31:57.484: INFO: Waiting for pod pod-c88c0946-2ee5-11ea-9996-0242ac110006 to disappear
Jan 4 11:31:57.494: INFO: Pod pod-c88c0946-2ee5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:31:57.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2nsjc" for this suite.
Jan 4 11:32:03.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:32:03.781: INFO: namespace: e2e-tests-emptydir-2nsjc, resource: bindings, ignored listing per whitelist
Jan 4 11:32:03.802: INFO: namespace e2e-tests-emptydir-2nsjc deletion completed in 6.297102369s
• [SLOW TEST:20.429 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:32:03.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 4 11:32:04.291: INFO: Waiting up to 5m0s for pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006" in namespace "e2e-tests-var-expansion-85fck" to be "success or failure"
Jan 4 11:32:04.309: INFO: Pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.268055ms
Jan 4 11:32:06.332: INFO: Pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040893637s
Jan 4 11:32:08.942: INFO: Pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.651000632s
Jan 4 11:32:11.004: INFO: Pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.713399286s
Jan 4 11:32:13.024: INFO: Pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.732854688s
STEP: Saw pod success
Jan 4 11:32:13.024: INFO: Pod "var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:32:13.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006 container dapi-container:
STEP: delete the pod
Jan 4 11:32:13.680: INFO: Waiting for pod var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006 to disappear
Jan 4 11:32:14.084: INFO: Pod var-expansion-d4d5a127-2ee5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:32:14.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-85fck" for this suite.
Jan 4 11:32:20.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:32:20.457: INFO: namespace: e2e-tests-var-expansion-85fck, resource: bindings, ignored listing per whitelist
Jan 4 11:32:20.600: INFO: namespace e2e-tests-var-expansion-85fck deletion completed in 6.503703093s
• [SLOW TEST:16.798 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:32:20.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0104 11:32:35.313814 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 4 11:32:35.313: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:32:35.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-l7m5q" for this suite.
Jan 4 11:32:51.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:32:53.228: INFO: namespace: e2e-tests-gc-l7m5q, resource: bindings, ignored listing per whitelist
Jan 4 11:32:53.232: INFO: namespace e2e-tests-gc-l7m5q deletion completed in 16.458570354s
• [SLOW TEST:32.632 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:32:53.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 4 11:32:56.825: INFO: Waiting up to 5m0s for pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-wjpk9" to be "success or failure"
Jan 4 11:32:57.905: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 1.079876361s
Jan 4 11:32:59.921: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095700328s
Jan 4 11:33:02.028: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.20300896s
Jan 4 11:33:04.041: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.216269493s
Jan 4 11:33:06.117: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.291563009s
Jan 4 11:33:08.132: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.307439782s
Jan 4 11:33:10.195: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.369576654s
STEP: Saw pod success
Jan 4 11:33:10.195: INFO: Pod "pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:33:10.210: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006 container test-container:
STEP: delete the pod
Jan 4 11:33:10.438: INFO: Waiting for pod pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006 to disappear
Jan 4 11:33:10.457: INFO: Pod pod-f3ebb4a1-2ee5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:33:10.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wjpk9" for this suite.
Jan 4 11:33:16.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:33:16.914: INFO: namespace: e2e-tests-emptydir-wjpk9, resource: bindings, ignored listing per whitelist
Jan 4 11:33:16.918: INFO: namespace e2e-tests-emptydir-wjpk9 deletion completed in 6.392722545s
• [SLOW TEST:23.685 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:33:16.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:33:17.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-klsn6" for this suite.
Jan 4 11:33:41.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:33:41.487: INFO: namespace: e2e-tests-pods-klsn6, resource: bindings, ignored listing per whitelist
Jan 4 11:33:41.491: INFO: namespace e2e-tests-pods-klsn6 deletion completed in 24.30587295s
• [SLOW TEST:24.573 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:33:41.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-0efb86b6-2ee6-11ea-9996-0242ac110006
STEP: Creating configMap with name cm-test-opt-upd-0efb87e0-2ee6-11ea-9996-0242ac110006
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0efb86b6-2ee6-11ea-9996-0242ac110006
STEP: Updating configmap cm-test-opt-upd-0efb87e0-2ee6-11ea-9996-0242ac110006
STEP: Creating configMap with name cm-test-opt-create-0efb87f8-2ee6-11ea-9996-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:33:56.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-khxrd" for this suite.
Jan 4 11:34:23.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:34:23.100: INFO: namespace: e2e-tests-projected-khxrd, resource: bindings, ignored listing per whitelist
Jan 4 11:34:23.228: INFO: namespace e2e-tests-projected-khxrd deletion completed in 26.403137378s
• [SLOW TEST:41.737 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:34:23.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-27d47db6-2ee6-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 4 11:34:23.465: INFO: Waiting up to 5m0s for pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-rzr54" to be "success or failure"
Jan 4 11:34:23.482: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.204397ms
Jan 4 11:34:25.557: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091589928s
Jan 4 11:34:27.568: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103382314s
Jan 4 11:34:29.670: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20547999s
Jan 4 11:34:31.829: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363744359s
Jan 4 11:34:34.024: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.558841209s
STEP: Saw pod success
Jan 4 11:34:34.024: INFO: Pod "pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:34:34.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006 container configmap-volume-test:
STEP: delete the pod
Jan 4 11:34:34.399: INFO: Waiting for pod pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006 to disappear
Jan 4 11:34:34.408: INFO: Pod pod-configmaps-27db8e05-2ee6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:34:34.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rzr54" for this suite.
Jan 4 11:34:40.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:34:40.823: INFO: namespace: e2e-tests-configmap-rzr54, resource: bindings, ignored listing per whitelist
Jan 4 11:34:40.926: INFO: namespace e2e-tests-configmap-rzr54 deletion completed in 6.510440804s
• [SLOW TEST:17.698 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:34:40.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-326c8938-2ee6-11ea-9996-0242ac110006
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-326c8938-2ee6-11ea-9996-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:36:15.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8wb6k" for this suite.
Jan 4 11:36:39.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:36:40.117: INFO: namespace: e2e-tests-projected-8wb6k, resource: bindings, ignored listing per whitelist
Jan 4 11:36:40.222: INFO: namespace e2e-tests-projected-8wb6k deletion completed in 24.290180059s
• [SLOW TEST:119.295 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:36:40.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-lgsrt/secret-test-79845e92-2ee6-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 4 11:36:40.558: INFO: Waiting up to 5m0s for pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-lgsrt" to be "success or failure"
Jan 4 11:36:40.576: INFO: Pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.075847ms
Jan 4 11:36:42.632: INFO: Pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073723791s
Jan 4 11:36:44.646: INFO: Pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08766252s
Jan 4 11:36:46.678: INFO: Pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120183295s
Jan 4 11:36:48.701: INFO: Pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142726484s
STEP: Saw pod success
Jan 4 11:36:48.701: INFO: Pod "pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:36:48.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006 container env-test:
STEP: delete the pod
Jan 4 11:36:48.814: INFO: Waiting for pod pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006 to disappear
Jan 4 11:36:48.864: INFO: Pod pod-configmaps-7985fd5e-2ee6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:36:48.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lgsrt" for this suite.
Jan 4 11:36:54.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:36:55.032: INFO: namespace: e2e-tests-secrets-lgsrt, resource: bindings, ignored listing per whitelist
Jan 4 11:36:55.094: INFO: namespace e2e-tests-secrets-lgsrt deletion completed in 6.215888783s
• [SLOW TEST:14.871 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:36:55.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-nz5wq in namespace e2e-tests-proxy-c2wgk
I0104 11:36:55.894661 8 runners.go:184] Created replication controller with name: proxy-service-nz5wq, namespace: e2e-tests-proxy-c2wgk, replica count: 1
I0104 11:36:56.946797 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:36:57.947383 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:36:58.947781 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:36:59.948461 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:37:00.949282 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:37:01.950000 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:37:02.950784 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:37:03.951411 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0104 11:37:04.951913 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:37:05.952750 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:37:06.953572 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:37:07.954009 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:37:08.954463 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0104 11:37:09.954995 8 runners.go:184] proxy-service-nz5wq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 4 11:37:09.984: INFO: setup took 14.584138877s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/pods/proxy-service-nz5wq-ljmlv:160/proxy/: foo (200; 60.815767ms)
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/services/http:proxy-service-nz5wq:portname2/proxy/: bar (200; 60.632761ms)
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/services/proxy-service-nz5wq:portname2/proxy/: bar (200; 60.730172ms)
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/pods/proxy-service-nz5wq-ljmlv:162/proxy/: bar (200; 60.90875ms)
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/pods/http:proxy-service-nz5wq-ljmlv:160/proxy/: foo (200; 60.585408ms)
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/pods/http:proxy-service-nz5wq-ljmlv:162/proxy/: bar (200; 60.864815ms)
Jan 4 11:37:10.046: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-c2wgk/pods/proxy-service-nz5wq-ljmlv/proxy/:
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 4 11:37:31.464: INFO: Waiting up to 5m0s for pod "pod-97eda163-2ee6-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-8xc8z" to be "success or failure"
Jan 4 11:37:31.484: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.923389ms
Jan 4 11:37:33.974: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.509798613s
Jan 4 11:37:36.214: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74890957s
Jan 4 11:37:38.226: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761177848s
Jan 4 11:37:40.241: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776603391s
Jan 4 11:37:42.258: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.793641547s
STEP: Saw pod success
Jan 4 11:37:42.258: INFO: Pod "pod-97eda163-2ee6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:37:42.269: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-97eda163-2ee6-11ea-9996-0242ac110006 container test-container:
STEP: delete the pod
Jan 4 11:37:42.420: INFO: Waiting for pod pod-97eda163-2ee6-11ea-9996-0242ac110006 to disappear
Jan 4 11:37:42.428: INFO: Pod pod-97eda163-2ee6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:37:42.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8xc8z" for this suite.
Jan 4 11:37:50.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:37:50.702: INFO: namespace: e2e-tests-emptydir-8xc8z, resource: bindings, ignored listing per whitelist
Jan 4 11:37:50.735: INFO: namespace e2e-tests-emptydir-8xc8z deletion completed in 8.299270956s
• [SLOW TEST:19.578 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:37:50.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-pw2w
STEP: Creating a pod to test atomic-volume-subpath
Jan 4 11:37:51.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pw2w" in namespace "e2e-tests-subpath-s4tww" to be "success or failure"
Jan 4 11:37:51.082: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 37.826205ms
Jan 4 11:37:53.099: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054229175s
Jan 4 11:37:55.113: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068215544s
Jan 4 11:37:57.225: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180754675s
Jan 4 11:37:59.248: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203260778s
Jan 4 11:38:01.272: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.22799466s
Jan 4 11:38:03.288: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.243535335s
Jan 4 11:38:05.303: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.259093551s
Jan 4 11:38:07.313: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 16.269015062s
Jan 4 11:38:09.333: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Pending", Reason="", readiness=false. Elapsed: 18.288694023s
Jan 4 11:38:11.354: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 20.309701349s
Jan 4 11:38:13.371: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 22.326636073s
Jan 4 11:38:15.385: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 24.340648242s
Jan 4 11:38:17.400: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 26.355371245s
Jan 4 11:38:19.418: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 28.373616532s
Jan 4 11:38:21.436: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 30.391096575s
Jan 4 11:38:23.453: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 32.408137159s
Jan 4 11:38:25.472: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 34.427939097s
Jan 4 11:38:27.489: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Running", Reason="", readiness=false. Elapsed: 36.444219371s
Jan 4 11:38:29.507: INFO: Pod "pod-subpath-test-configmap-pw2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.462106573s
STEP: Saw pod success
Jan 4 11:38:29.507: INFO: Pod "pod-subpath-test-configmap-pw2w" satisfied condition "success or failure"
Jan 4 11:38:29.515: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pw2w container test-container-subpath-configmap-pw2w:
STEP: delete the pod
Jan 4 11:38:29.740: INFO: Waiting for pod pod-subpath-test-configmap-pw2w to disappear
Jan 4 11:38:29.754: INFO: Pod pod-subpath-test-configmap-pw2w no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pw2w
Jan 4 11:38:29.755: INFO: Deleting pod "pod-subpath-test-configmap-pw2w" in namespace "e2e-tests-subpath-s4tww"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:38:29.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-s4tww" for this suite.
Jan 4 11:38:37.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:38:38.178: INFO: namespace: e2e-tests-subpath-s4tww, resource: bindings, ignored listing per whitelist
Jan 4 11:38:38.206: INFO: namespace e2e-tests-subpath-s4tww deletion completed in 8.422501247s
• [SLOW TEST:47.470 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:38:38.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-srmck
Jan 4 11:38:48.871: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-srmck
STEP: checking the pod's current state and verifying that restartCount is present
Jan 4 11:38:48.878: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:42:50.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-srmck" for this suite.
Jan 4 11:42:58.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:42:59.062: INFO: namespace: e2e-tests-container-probe-srmck, resource: bindings, ignored listing per whitelist
Jan 4 11:42:59.214: INFO: namespace e2e-tests-container-probe-srmck deletion completed in 8.263927642s
• [SLOW TEST:261.008 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] PreStop
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:42:59.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-2srbq
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-2srbq
STEP: Deleting pre-stop pod
Jan 4 11:43:29.165: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:43:29.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-2srbq" for this suite.
Jan 4 11:44:09.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:44:09.402: INFO: namespace: e2e-tests-prestop-2srbq, resource: bindings, ignored listing per whitelist
Jan 4 11:44:09.419: INFO: namespace e2e-tests-prestop-2srbq deletion completed in 40.152056428s
• [SLOW TEST:70.204 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:44:09.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-854e6da2-2ee7-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 4 11:44:10.018: INFO: Waiting up to 5m0s for pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-sxscs" to be "success or failure"
Jan 4 11:44:10.046: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 27.358412ms
Jan 4 11:44:13.237: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.218372211s
Jan 4 11:44:15.280: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.26199068s
Jan 4 11:44:17.294: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.275649249s
Jan 4 11:44:20.020: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.001363183s
Jan 4 11:44:22.030: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.011278608s
Jan 4 11:44:24.323: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.30471758s
Jan 4 11:44:27.175: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.156599331s
Jan 4 11:44:29.189: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.170229456s
STEP: Saw pod success
Jan 4 11:44:29.189: INFO: Pod "pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan 4 11:44:29.194: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006 container configmap-volume-test:
STEP: delete the pod
Jan 4 11:44:30.905: INFO: Waiting for pod pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006 to disappear
Jan 4 11:44:31.070: INFO: Pod pod-configmaps-85782bc1-2ee7-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 4 11:44:31.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sxscs" for this suite.
Jan 4 11:44:37.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 11:44:37.613: INFO: namespace: e2e-tests-configmap-sxscs, resource: bindings, ignored listing per whitelist
Jan 4 11:44:37.624: INFO: namespace e2e-tests-configmap-sxscs deletion completed in 6.537328509s
• [SLOW TEST:28.204 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 4 11:44:37.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 4 11:44:38.414: INFO: Number of nodes with available pods: 0 Jan 4 11:44:38.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:39.453: INFO: Number of nodes with available pods: 0 Jan 4 11:44:39.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:40.445: INFO: Number of nodes with available pods: 0 Jan 4 11:44:40.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:41.554: INFO: Number of nodes with available pods: 0 Jan 4 11:44:41.554: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:42.478: INFO: Number of nodes with available pods: 0 Jan 4 11:44:42.478: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:43.436: INFO: Number of nodes with available pods: 0 Jan 4 11:44:43.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:44.453: INFO: Number of nodes with available pods: 0 Jan 4 11:44:44.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:45.461: INFO: Number of nodes with available pods: 0 Jan 4 11:44:45.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:47.435: INFO: Number of nodes with available pods: 0 Jan 4 11:44:47.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:48.564: INFO: Number of nodes with available pods: 0 Jan 4 11:44:48.564: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:49.442: INFO: Number of nodes with available pods: 0 Jan 4 11:44:49.443: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:50.480: INFO: Number of nodes with available pods: 1 Jan 4 11:44:50.481: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
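The "Stop a daemon pod" step above amounts to deleting one of the DaemonSet's pods directly and then polling until the controller has replaced it on the node. A minimal client-go fragment for the deletion half might look like the following; it assumes an already-configured clientset, and with pre-0.18 client-go the Delete call takes fewer arguments (no context).

    package example

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteOneDaemonPod deletes a single pod owned by the DaemonSet; the
    // DaemonSet controller is then expected to schedule a replacement on the node.
    func deleteOneDaemonPod(ctx context.Context, cs kubernetes.Interface, namespace, podName string) error {
    	if err := cs.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{}); err != nil {
    		return fmt.Errorf("deleting pod %s/%s: %w", namespace, podName, err)
    	}
    	return nil
    }
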
Jan 4 11:44:50.883: INFO: Number of nodes with available pods: 0 Jan 4 11:44:50.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:51.899: INFO: Number of nodes with available pods: 0 Jan 4 11:44:51.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:52.906: INFO: Number of nodes with available pods: 0 Jan 4 11:44:52.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:55.065: INFO: Number of nodes with available pods: 0 Jan 4 11:44:55.065: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:55.936: INFO: Number of nodes with available pods: 0 Jan 4 11:44:55.937: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:56.929: INFO: Number of nodes with available pods: 0 Jan 4 11:44:56.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:58.305: INFO: Number of nodes with available pods: 0 Jan 4 11:44:58.305: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:59.829: INFO: Number of nodes with available pods: 0 Jan 4 11:44:59.829: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:44:59.906: INFO: Number of nodes with available pods: 0 Jan 4 11:44:59.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:00.958: INFO: Number of nodes with available pods: 0 Jan 4 11:45:00.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:02.096: INFO: Number of nodes with available pods: 0 Jan 4 11:45:02.096: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:02.914: INFO: Number of nodes with available pods: 0 Jan 4 11:45:02.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:03.927: INFO: Number of nodes with available pods: 0 Jan 4 11:45:03.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:04.902: INFO: Number of nodes with available pods: 0 Jan 4 11:45:04.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:05.913: INFO: Number of nodes with available pods: 0 Jan 4 11:45:05.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:06.933: INFO: Number of nodes with available pods: 0 Jan 4 11:45:06.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:07.909: INFO: Number of nodes with available pods: 0 Jan 4 11:45:07.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:08.904: INFO: Number of nodes with available pods: 0 Jan 4 11:45:08.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:09.914: INFO: Number of nodes with available pods: 0 Jan 4 11:45:09.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:10.909: INFO: Number of nodes with available pods: 0 Jan 4 11:45:10.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:11.969: INFO: Number of nodes with available pods: 0 Jan 4 11:45:11.969: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:13.086: INFO: Number of nodes with available pods: 0 Jan 4 11:45:13.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:15.373: INFO: Number of nodes with available 
pods: 0 Jan 4 11:45:15.373: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:15.916: INFO: Number of nodes with available pods: 0 Jan 4 11:45:15.916: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:17.041: INFO: Number of nodes with available pods: 0 Jan 4 11:45:17.041: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:17.899: INFO: Number of nodes with available pods: 0 Jan 4 11:45:17.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:18.973: INFO: Number of nodes with available pods: 0 Jan 4 11:45:18.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:19.913: INFO: Number of nodes with available pods: 0 Jan 4 11:45:19.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:22.988: INFO: Number of nodes with available pods: 0 Jan 4 11:45:22.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:23.920: INFO: Number of nodes with available pods: 0 Jan 4 11:45:23.920: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:25.086: INFO: Number of nodes with available pods: 0 Jan 4 11:45:25.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:25.904: INFO: Number of nodes with available pods: 0 Jan 4 11:45:25.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:26.914: INFO: Number of nodes with available pods: 0 Jan 4 11:45:26.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 4 11:45:27.916: INFO: Number of nodes with available pods: 1 Jan 4 11:45:27.916: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-55h67, will wait for the garbage collector to delete the pods Jan 4 11:45:28.006: INFO: Deleting DaemonSet.extensions daemon-set took: 23.696351ms Jan 4 11:45:28.107: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.575617ms Jan 4 11:45:39.421: INFO: Number of nodes with available pods: 0 Jan 4 11:45:39.421: INFO: Number of running nodes: 0, number of available pods: 0 Jan 4 11:45:39.512: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-55h67/daemonsets","resourceVersion":"17133085"},"items":null} Jan 4 11:45:39.525: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-55h67/pods","resourceVersion":"17133085"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:45:39.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-55h67" for this suite. 
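The teardown above deletes the DaemonSet itself and then "will wait for the garbage collector to delete the pods", i.e. the dependents are cleaned up by the API server's garbage collector rather than orphaned. Expressed with a current client-go (the e2e framework's own helper differs), that corresponds to a propagation policy on the delete call:

    package example

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteDaemonSetAndDependents deletes the DaemonSet and asks the garbage
    // collector to remove the pods it owns, instead of orphaning them.
    func deleteDaemonSetAndDependents(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
    	propagation := metav1.DeletePropagationBackground // DeletePropagationForeground would block the delete on the pods
    	return cs.AppsV1().DaemonSets(namespace).Delete(ctx, name, metav1.DeleteOptions{
    		PropagationPolicy: &propagation,
    	})
    }
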
Jan 4 11:45:47.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:45:47.676: INFO: namespace: e2e-tests-daemonsets-55h67, resource: bindings, ignored listing per whitelist Jan 4 11:45:47.800: INFO: namespace e2e-tests-daemonsets-55h67 deletion completed in 8.251400572s • [SLOW TEST:70.176 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:45:47.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-6sxs STEP: Creating a pod to test atomic-volume-subpath Jan 4 11:45:48.726: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6sxs" in namespace "e2e-tests-subpath-kx4jq" to be "success or failure" Jan 4 11:45:48.888: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 161.667811ms Jan 4 11:45:50.918: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191916205s Jan 4 11:45:52.977: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250416298s Jan 4 11:45:54.995: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2687833s Jan 4 11:45:58.688: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 9.961640891s Jan 4 11:46:00.973: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.246467547s Jan 4 11:46:03.097: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.370030517s Jan 4 11:46:05.115: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.388178423s Jan 4 11:46:07.191: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.464196945s Jan 4 11:46:10.088: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 21.361669419s Jan 4 11:46:12.105: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 23.378451122s Jan 4 11:46:14.459: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 25.732529246s Jan 4 11:46:17.022: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.295802544s Jan 4 11:46:19.105: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 30.37858947s Jan 4 11:46:21.131: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Pending", Reason="", readiness=false. Elapsed: 32.40458031s Jan 4 11:46:23.170: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 34.442964589s Jan 4 11:46:25.184: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 36.457491875s Jan 4 11:46:27.195: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 38.46800727s Jan 4 11:46:29.207: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 40.480901813s Jan 4 11:46:31.228: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 42.500975141s Jan 4 11:46:33.243: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 44.516624943s Jan 4 11:46:35.258: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 46.531790673s Jan 4 11:46:37.561: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Running", Reason="", readiness=false. Elapsed: 48.834149567s Jan 4 11:46:39.583: INFO: Pod "pod-subpath-test-configmap-6sxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.856064795s STEP: Saw pod success Jan 4 11:46:39.583: INFO: Pod "pod-subpath-test-configmap-6sxs" satisfied condition "success or failure" Jan 4 11:46:39.588: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-6sxs container test-container-subpath-configmap-6sxs: STEP: delete the pod Jan 4 11:46:40.084: INFO: Waiting for pod pod-subpath-test-configmap-6sxs to disappear Jan 4 11:46:40.115: INFO: Pod pod-subpath-test-configmap-6sxs no longer exists STEP: Deleting pod pod-subpath-test-configmap-6sxs Jan 4 11:46:40.115: INFO: Deleting pod "pod-subpath-test-configmap-6sxs" in namespace "e2e-tests-subpath-kx4jq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:46:40.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-kx4jq" for this suite. 
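The "Atomic writer volumes ... subpaths with configmap pod" spec above mounts a ConfigMap-backed volume into the container through a subPath, so the container sees a single file rather than the whole volume. The fragment below shows the general shape in Go; the ConfigMap name, key, image and paths are illustrative, not the ones generated by the suite.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "config-volume",
    				VolumeSource: corev1.VolumeSource{
    					ConfigMap: &corev1.ConfigMapVolumeSource{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // illustrative name
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "test-container-subpath",
    				Image:   "docker.io/library/busybox:1.29", // assumption; the suite uses its own test images
    				Command: []string{"sh", "-c", "cat /etc/config/data && sleep 60"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "config-volume",
    					MountPath: "/etc/config/data",
    					SubPath:   "data-key", // mount only this file (ConfigMap key) from the volume
    				}},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
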
Jan 4 11:46:48.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:46:48.386: INFO: namespace: e2e-tests-subpath-kx4jq, resource: bindings, ignored listing per whitelist Jan 4 11:46:48.537: INFO: namespace e2e-tests-subpath-kx4jq deletion completed in 8.397554329s • [SLOW TEST:60.736 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:46:48.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-e44f7b8f-2ee7-11ea-9996-0242ac110006 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-e44f7b8f-2ee7-11ea-9996-0242ac110006 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:48:32.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-b5dcv" for this suite. 
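The "updates should be reflected in volume" spec above relies on the kubelet propagating ConfigMap changes into an already-mounted volume, which is not immediate (hence the long "waiting to observe update in volume" phase). The update half of that flow is a plain get-mutate-update round trip; sketched below with a current client-go, where older releases take fewer arguments (no context, no options struct):

    package example

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // updateConfigMapValue changes one key of an existing ConfigMap; pods that mount
    // it as a volume eventually see the new file contents after the kubelet resync,
    // without being restarted.
    func updateConfigMapValue(ctx context.Context, cs kubernetes.Interface, namespace, name, key, value string) error {
    	cm, err := cs.CoreV1().ConfigMaps(namespace).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if cm.Data == nil {
    		cm.Data = map[string]string{}
    	}
    	cm.Data[key] = value
    	_, err = cs.CoreV1().ConfigMaps(namespace).Update(ctx, cm, metav1.UpdateOptions{})
    	return err
    }
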
Jan 4 11:49:00.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:49:00.690: INFO: namespace: e2e-tests-configmap-b5dcv, resource: bindings, ignored listing per whitelist Jan 4 11:49:00.739: INFO: namespace e2e-tests-configmap-b5dcv deletion completed in 28.186197996s • [SLOW TEST:132.202 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:49:00.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nkrvl [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 4 11:49:01.240: INFO: Found 0 stateful pods, waiting for 3 Jan 4 11:49:11.261: INFO: Found 1 stateful pods, waiting for 3 Jan 4 11:49:21.609: INFO: Found 2 stateful pods, waiting for 3 Jan 4 11:49:31.266: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:49:31.266: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:49:31.266: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 4 11:49:41.258: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:49:41.259: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:49:41.259: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 4 11:49:41.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkrvl ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 4 11:49:42.011: INFO: stderr: "I0104 11:49:41.509179 46 log.go:172] (0xc0006f0370) (0xc000712640) Create stream\nI0104 11:49:41.509388 46 log.go:172] (0xc0006f0370) (0xc000712640) Stream added, broadcasting: 1\nI0104 11:49:41.538839 46 log.go:172] (0xc0006f0370) Reply frame received for 1\nI0104 11:49:41.538934 46 log.go:172] (0xc0006f0370) (0xc0005d6e60) Create stream\nI0104 11:49:41.538950 46 log.go:172] (0xc0006f0370) (0xc0005d6e60) Stream added, broadcasting: 3\nI0104 11:49:41.541998 46 log.go:172] (0xc0006f0370) Reply frame received for 
3\nI0104 11:49:41.542030 46 log.go:172] (0xc0006f0370) (0xc000690000) Create stream\nI0104 11:49:41.542042 46 log.go:172] (0xc0006f0370) (0xc000690000) Stream added, broadcasting: 5\nI0104 11:49:41.545235 46 log.go:172] (0xc0006f0370) Reply frame received for 5\nI0104 11:49:41.876750 46 log.go:172] (0xc0006f0370) Data frame received for 3\nI0104 11:49:41.876825 46 log.go:172] (0xc0005d6e60) (3) Data frame handling\nI0104 11:49:41.876851 46 log.go:172] (0xc0005d6e60) (3) Data frame sent\nI0104 11:49:41.998440 46 log.go:172] (0xc0006f0370) Data frame received for 1\nI0104 11:49:41.998599 46 log.go:172] (0xc000712640) (1) Data frame handling\nI0104 11:49:41.998632 46 log.go:172] (0xc000712640) (1) Data frame sent\nI0104 11:49:41.998659 46 log.go:172] (0xc0006f0370) (0xc000712640) Stream removed, broadcasting: 1\nI0104 11:49:41.999654 46 log.go:172] (0xc0006f0370) (0xc0005d6e60) Stream removed, broadcasting: 3\nI0104 11:49:42.001023 46 log.go:172] (0xc0006f0370) (0xc000690000) Stream removed, broadcasting: 5\nI0104 11:49:42.001114 46 log.go:172] (0xc0006f0370) (0xc000712640) Stream removed, broadcasting: 1\nI0104 11:49:42.001134 46 log.go:172] (0xc0006f0370) (0xc0005d6e60) Stream removed, broadcasting: 3\nI0104 11:49:42.001155 46 log.go:172] (0xc0006f0370) (0xc000690000) Stream removed, broadcasting: 5\n" Jan 4 11:49:42.011: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 4 11:49:42.011: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 4 11:49:52.141: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 4 11:50:02.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkrvl ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:50:02.923: INFO: stderr: "I0104 11:50:02.559971 68 log.go:172] (0xc0001389a0) (0xc000501720) Create stream\nI0104 11:50:02.560540 68 log.go:172] (0xc0001389a0) (0xc000501720) Stream added, broadcasting: 1\nI0104 11:50:02.574362 68 log.go:172] (0xc0001389a0) Reply frame received for 1\nI0104 11:50:02.574715 68 log.go:172] (0xc0001389a0) (0xc000852000) Create stream\nI0104 11:50:02.574812 68 log.go:172] (0xc0001389a0) (0xc000852000) Stream added, broadcasting: 3\nI0104 11:50:02.576988 68 log.go:172] (0xc0001389a0) Reply frame received for 3\nI0104 11:50:02.577044 68 log.go:172] (0xc0001389a0) (0xc0008520a0) Create stream\nI0104 11:50:02.577058 68 log.go:172] (0xc0001389a0) (0xc0008520a0) Stream added, broadcasting: 5\nI0104 11:50:02.580070 68 log.go:172] (0xc0001389a0) Reply frame received for 5\nI0104 11:50:02.745900 68 log.go:172] (0xc0001389a0) Data frame received for 3\nI0104 11:50:02.746017 68 log.go:172] (0xc000852000) (3) Data frame handling\nI0104 11:50:02.746041 68 log.go:172] (0xc000852000) (3) Data frame sent\nI0104 11:50:02.914074 68 log.go:172] (0xc0001389a0) Data frame received for 1\nI0104 11:50:02.914201 68 log.go:172] (0xc000501720) (1) Data frame handling\nI0104 11:50:02.914246 68 log.go:172] (0xc000501720) (1) Data frame sent\nI0104 11:50:02.914279 68 log.go:172] (0xc0001389a0) (0xc000501720) Stream removed, broadcasting: 1\nI0104 11:50:02.915628 68 log.go:172] (0xc0001389a0) (0xc000852000) Stream removed, broadcasting: 3\nI0104 11:50:02.915932 68 log.go:172] 
(0xc0001389a0) (0xc0008520a0) Stream removed, broadcasting: 5\nI0104 11:50:02.916016 68 log.go:172] (0xc0001389a0) (0xc000501720) Stream removed, broadcasting: 1\nI0104 11:50:02.916030 68 log.go:172] (0xc0001389a0) (0xc000852000) Stream removed, broadcasting: 3\nI0104 11:50:02.916039 68 log.go:172] (0xc0001389a0) (0xc0008520a0) Stream removed, broadcasting: 5\nI0104 11:50:02.916277 68 log.go:172] (0xc0001389a0) Go away received\n" Jan 4 11:50:02.923: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 4 11:50:02.923: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 4 11:50:13.100: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:50:13.101: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:50:13.101: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:50:23.155: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:50:23.155: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:50:23.155: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:50:33.134: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:50:33.134: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:50:43.133: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:50:43.133: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 4 11:50:53.126: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update STEP: Rolling back to a previous revision Jan 4 11:51:03.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkrvl ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 4 11:51:03.876: INFO: stderr: "I0104 11:51:03.291178 89 log.go:172] (0xc0006e4370) (0xc0005832c0) Create stream\nI0104 11:51:03.291508 89 log.go:172] (0xc0006e4370) (0xc0005832c0) Stream added, broadcasting: 1\nI0104 11:51:03.324867 89 log.go:172] (0xc0006e4370) Reply frame received for 1\nI0104 11:51:03.325014 89 log.go:172] (0xc0006e4370) (0xc000638000) Create stream\nI0104 11:51:03.325107 89 log.go:172] (0xc0006e4370) (0xc000638000) Stream added, broadcasting: 3\nI0104 11:51:03.326835 89 log.go:172] (0xc0006e4370) Reply frame received for 3\nI0104 11:51:03.326899 89 log.go:172] (0xc0006e4370) (0xc0005a2000) Create stream\nI0104 11:51:03.326909 89 log.go:172] (0xc0006e4370) (0xc0005a2000) Stream added, broadcasting: 5\nI0104 11:51:03.328937 89 log.go:172] (0xc0006e4370) Reply frame received for 5\nI0104 11:51:03.647703 89 log.go:172] (0xc0006e4370) Data frame received for 3\nI0104 11:51:03.647763 89 log.go:172] (0xc000638000) (3) Data frame handling\nI0104 11:51:03.647794 89 log.go:172] (0xc000638000) (3) Data frame sent\nI0104 11:51:03.860951 89 log.go:172] (0xc0006e4370) (0xc000638000) Stream removed, broadcasting: 3\nI0104 11:51:03.861224 89 log.go:172] (0xc0006e4370) Data frame received for 1\nI0104 11:51:03.861264 89 log.go:172] (0xc0006e4370) 
(0xc0005a2000) Stream removed, broadcasting: 5\nI0104 11:51:03.861398 89 log.go:172] (0xc0005832c0) (1) Data frame handling\nI0104 11:51:03.861501 89 log.go:172] (0xc0005832c0) (1) Data frame sent\nI0104 11:51:03.861525 89 log.go:172] (0xc0006e4370) (0xc0005832c0) Stream removed, broadcasting: 1\nI0104 11:51:03.861554 89 log.go:172] (0xc0006e4370) Go away received\nI0104 11:51:03.862691 89 log.go:172] (0xc0006e4370) (0xc0005832c0) Stream removed, broadcasting: 1\nI0104 11:51:03.862871 89 log.go:172] (0xc0006e4370) (0xc000638000) Stream removed, broadcasting: 3\nI0104 11:51:03.862908 89 log.go:172] (0xc0006e4370) (0xc0005a2000) Stream removed, broadcasting: 5\n" Jan 4 11:51:03.877: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 4 11:51:03.877: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 4 11:51:14.040: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 4 11:51:24.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkrvl ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 4 11:51:24.783: INFO: stderr: "I0104 11:51:24.385691 109 log.go:172] (0xc00072a370) (0xc000149540) Create stream\nI0104 11:51:24.385854 109 log.go:172] (0xc00072a370) (0xc000149540) Stream added, broadcasting: 1\nI0104 11:51:24.391828 109 log.go:172] (0xc00072a370) Reply frame received for 1\nI0104 11:51:24.391854 109 log.go:172] (0xc00072a370) (0xc00034c000) Create stream\nI0104 11:51:24.391861 109 log.go:172] (0xc00072a370) (0xc00034c000) Stream added, broadcasting: 3\nI0104 11:51:24.393637 109 log.go:172] (0xc00072a370) Reply frame received for 3\nI0104 11:51:24.393670 109 log.go:172] (0xc00072a370) (0xc0006c4000) Create stream\nI0104 11:51:24.393685 109 log.go:172] (0xc00072a370) (0xc0006c4000) Stream added, broadcasting: 5\nI0104 11:51:24.394996 109 log.go:172] (0xc00072a370) Reply frame received for 5\nI0104 11:51:24.581257 109 log.go:172] (0xc00072a370) Data frame received for 3\nI0104 11:51:24.581795 109 log.go:172] (0xc00034c000) (3) Data frame handling\nI0104 11:51:24.581889 109 log.go:172] (0xc00034c000) (3) Data frame sent\nI0104 11:51:24.774443 109 log.go:172] (0xc00072a370) Data frame received for 1\nI0104 11:51:24.774643 109 log.go:172] (0xc00072a370) (0xc00034c000) Stream removed, broadcasting: 3\nI0104 11:51:24.774731 109 log.go:172] (0xc000149540) (1) Data frame handling\nI0104 11:51:24.774755 109 log.go:172] (0xc000149540) (1) Data frame sent\nI0104 11:51:24.774783 109 log.go:172] (0xc00072a370) (0xc0006c4000) Stream removed, broadcasting: 5\nI0104 11:51:24.774809 109 log.go:172] (0xc00072a370) (0xc000149540) Stream removed, broadcasting: 1\nI0104 11:51:24.774870 109 log.go:172] (0xc00072a370) Go away received\nI0104 11:51:24.775186 109 log.go:172] (0xc00072a370) (0xc000149540) Stream removed, broadcasting: 1\nI0104 11:51:24.775224 109 log.go:172] (0xc00072a370) (0xc00034c000) Stream removed, broadcasting: 3\nI0104 11:51:24.775244 109 log.go:172] (0xc00072a370) (0xc0006c4000) Stream removed, broadcasting: 5\n" Jan 4 11:51:24.783: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 4 11:51:24.783: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 4 11:51:34.881: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 
11:51:34.881: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 4 11:51:34.881: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 4 11:51:45.940: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:51:45.941: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 4 11:51:45.941: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 4 11:51:54.909: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:51:54.909: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 4 11:52:04.932: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update Jan 4 11:52:04.932: INFO: Waiting for Pod e2e-tests-statefulset-nkrvl/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 4 11:52:14.911: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkrvl/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 4 11:52:24.957: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nkrvl Jan 4 11:52:24.962: INFO: Scaling statefulset ss2 to 0 Jan 4 11:52:45.009: INFO: Waiting for statefulset status.replicas updated to 0 Jan 4 11:52:45.015: INFO: Waiting for stateful set status.replicas to become 0, currently 1 Jan 4 11:52:55.030: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:52:55.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nkrvl" for this suite. 
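In the StatefulSet spec above, the rolling update is triggered purely by editing the pod template image (nginx:1.14-alpine to nginx:1.15-alpine and back), which produces a new controller revision alongside the old one (the ss2-7c9b54fd4c / ss2-6c5cd755cd pair seen in the log); the rollback is the same operation with the old template. The relevant shape of such a StatefulSet, sketched in Go with illustrative labels:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	labels := map[string]string{"app": "ss2"} // illustrative selector
    	replicas := int32(3)
    	ss := appsv1.StatefulSet{
    		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
    		Spec: appsv1.StatefulSetSpec{
    			Replicas:    &replicas,
    			ServiceName: "test", // the headless service created in this spec's BeforeEach
    			Selector:    &metav1.LabelSelector{MatchLabels: labels},
    			// RollingUpdate (the apps/v1 default) replaces pods one ordinal at a time,
    			// in reverse ordinal order, whenever the template below changes.
    			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
    				Type: appsv1.RollingUpdateStatefulSetStrategyType,
    			},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "nginx",
    						Image: "docker.io/library/nginx:1.14-alpine", // bumping this to 1.15-alpine drives the rollout
    					}},
    				},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(ss, "", "  ")
    	fmt.Println(string(out))
    }
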
Jan 4 11:53:03.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:53:03.280: INFO: namespace: e2e-tests-statefulset-nkrvl, resource: bindings, ignored listing per whitelist Jan 4 11:53:03.345: INFO: namespace e2e-tests-statefulset-nkrvl deletion completed in 8.258007272s • [SLOW TEST:242.606 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:53:03.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:54:03.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rcckm" for this suite. 
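The probe spec above ("readiness probe that fails should never be ready and never restart") boils down to a container whose readiness probe always exits non-zero: the pod keeps running, because readiness failures, unlike liveness failures, never restart the container. A hedged sketch of such a container against the v1.13-era k8s.io/api used in this run (newer releases rename the embedded Handler field to ProbeHandler); the image and probe timings are assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:  "test-webserver",
    		Image: "docker.io/library/nginx:1.14-alpine", // illustrative; the suite uses its own test image
    		ReadinessProbe: &corev1.Probe{
    			Handler: corev1.Handler{
    				// /bin/false always exits 1, so the probe never succeeds and the
    				// pod's Ready condition stays False for its whole lifetime.
    				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
    			},
    			InitialDelaySeconds: 5,
    			PeriodSeconds:       5,
    			FailureThreshold:    3,
    		},
    		// No LivenessProbe: a failing readiness probe alone never triggers a restart.
    	}
    	out, _ := json.MarshalIndent(c, "", "  ")
    	fmt.Println(string(out))
    }
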
Jan 4 11:54:27.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:54:27.549: INFO: namespace: e2e-tests-container-probe-rcckm, resource: bindings, ignored listing per whitelist Jan 4 11:54:27.871: INFO: namespace e2e-tests-container-probe-rcckm deletion completed in 24.404073875s • [SLOW TEST:84.525 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:54:27.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 4 11:54:28.049: INFO: Waiting up to 5m0s for pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-5xbxw" to be "success or failure" Jan 4 11:54:28.060: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.175226ms Jan 4 11:54:30.549: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499909891s Jan 4 11:54:32.594: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.545585758s Jan 4 11:54:34.923: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.874329836s Jan 4 11:54:36.953: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904471474s Jan 4 11:54:38.974: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.925248178s Jan 4 11:54:41.529: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.479990073s STEP: Saw pod success Jan 4 11:54:41.529: INFO: Pod "downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 11:54:41.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006 container dapi-container: STEP: delete the pod Jan 4 11:54:42.277: INFO: Waiting for pod downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006 to disappear Jan 4 11:54:42.288: INFO: Pod downward-api-f5d50c3c-2ee8-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:54:42.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5xbxw" for this suite. Jan 4 11:54:48.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:54:48.384: INFO: namespace: e2e-tests-downward-api-5xbxw, resource: bindings, ignored listing per whitelist Jan 4 11:54:48.454: INFO: namespace e2e-tests-downward-api-5xbxw deletion completed in 6.153622644s • [SLOW TEST:20.583 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:54:48.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4kq85 Jan 4 11:55:00.926: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4kq85 STEP: checking the pod's current state and verifying that restartCount is present Jan 4 11:55:00.931: INFO: Initial restart count of pod liveness-exec is 0 Jan 4 11:55:58.279: INFO: Restart count of pod e2e-tests-container-probe-4kq85/liveness-exec is now 1 (57.348062433s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:55:58.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-4kq85" for this suite. 
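The liveness-exec pod above follows the familiar pattern where the container creates /tmp/health, removes it after a while, and the `cat /tmp/health` exec probe then starts failing, so the kubelet restarts the container (restartCount goes 0 to 1 in the log). The sketch below illustrates that pattern; the image, command timings and probe settings are assumptions and may differ from the suite's, and it again uses the v1.13-era Handler field.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "liveness",
    				Image: "docker.io/library/busybox:1.29", // assumption
    				// The file exists for 30s, then disappears; after that the probe fails.
    				Args: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					Handler: corev1.Handler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 15,
    					PeriodSeconds:       5,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
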
Jan 4 11:56:06.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:56:06.765: INFO: namespace: e2e-tests-container-probe-4kq85, resource: bindings, ignored listing per whitelist Jan 4 11:56:06.978: INFO: namespace e2e-tests-container-probe-4kq85 deletion completed in 8.496486904s • [SLOW TEST:78.523 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:56:06.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 11:56:07.092: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:56:17.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4vfv9" for this suite. 
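The "remote command execution over websockets" spec above exercises the pods/exec subresource. The fragment below shows the more common way to drive that same subresource from Go, via client-go's SPDY executor rather than a raw websocket client, so it is an analogue of what the test checks, not the test's own code; it assumes an already-built rest.Config and clientset.

    package example

    import (
    	"bytes"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/kubernetes/scheme"
    	restclient "k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/remotecommand"
    )

    // execInPod runs a command in a pod's container over the pods/exec
    // subresource and returns its stdout.
    func execInPod(config *restclient.Config, cs kubernetes.Interface, namespace, pod, container string, command []string) (string, error) {
    	req := cs.CoreV1().RESTClient().Post().
    		Resource("pods").Namespace(namespace).Name(pod).SubResource("exec").
    		VersionedParams(&corev1.PodExecOptions{
    			Container: container,
    			Command:   command,
    			Stdout:    true,
    			Stderr:    true,
    		}, scheme.ParameterCodec)

    	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    	if err != nil {
    		return "", err
    	}
    	var stdout, stderr bytes.Buffer
    	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
    		return "", fmt.Errorf("exec failed: %v, stderr: %s", err, stderr.String())
    	}
    	return stdout.String(), nil
    }
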
Jan 4 11:57:01.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:57:02.002: INFO: namespace: e2e-tests-pods-4vfv9, resource: bindings, ignored listing per whitelist Jan 4 11:57:02.006: INFO: namespace e2e-tests-pods-4vfv9 deletion completed in 44.266734149s • [SLOW TEST:55.027 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:57:02.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-51c0f835-2ee9-11ea-9996-0242ac110006 STEP: Creating a pod to test consume secrets Jan 4 11:57:02.276: INFO: Waiting up to 5m0s for pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-j7kj9" to be "success or failure" Jan 4 11:57:02.327: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 50.376128ms Jan 4 11:57:04.347: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071001939s Jan 4 11:57:06.359: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082896911s Jan 4 11:57:08.371: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095023371s Jan 4 11:57:10.441: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164651522s Jan 4 11:57:12.471: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19452701s Jan 4 11:57:14.545: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.268123141s STEP: Saw pod success Jan 4 11:57:14.545: INFO: Pod "pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 11:57:14.570: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006 container secret-volume-test: STEP: delete the pod Jan 4 11:57:14.680: INFO: Waiting for pod pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006 to disappear Jan 4 11:57:15.826: INFO: Pod pod-secrets-51c1e4a3-2ee9-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:57:15.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-j7kj9" for this suite. Jan 4 11:57:22.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:57:22.479: INFO: namespace: e2e-tests-secrets-j7kj9, resource: bindings, ignored listing per whitelist Jan 4 11:57:22.706: INFO: namespace e2e-tests-secrets-j7kj9 deletion completed in 6.864373497s • [SLOW TEST:20.700 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:57:22.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jan 4 11:57:22.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 4 11:57:23.146: INFO: stderr: "" Jan 4 11:57:23.146: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:57:23.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rvrcs" for this 
suite. Jan 4 11:57:31.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:57:31.300: INFO: namespace: e2e-tests-kubectl-rvrcs, resource: bindings, ignored listing per whitelist Jan 4 11:57:31.400: INFO: namespace e2e-tests-kubectl-rvrcs deletion completed in 8.228458334s • [SLOW TEST:8.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:57:31.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jan 4 11:57:45.725: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-6344a1f8-2ee9-11ea-9996-0242ac110006", GenerateName:"", Namespace:"e2e-tests-pods-v5gj7", SelfLink:"/api/v1/namespaces/e2e-tests-pods-v5gj7/pods/pod-submit-remove-6344a1f8-2ee9-11ea-9996-0242ac110006", UID:"6346b76b-2ee9-11ea-a994-fa163e34d433", ResourceVersion:"17134512", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713735851, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"592206730"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-q7m6p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d1f8c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q7m6p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d35568), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ccdb60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d355a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d355c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d355c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d355cc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713735851, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713735864, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713735864, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63713735851, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001fbda40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fbda60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://0831aeebc01fc0424a5f3620785eee668ef1606c385f45be648092345f7d76b1"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:58:02.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-v5gj7" for this suite. Jan 4 11:58:10.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:58:10.981: INFO: namespace: e2e-tests-pods-v5gj7, resource: bindings, ignored listing per whitelist Jan 4 11:58:11.175: INFO: namespace e2e-tests-pods-v5gj7 deletion completed in 8.339106803s • [SLOW TEST:39.775 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:58:11.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 11:58:25.020: INFO: Waiting up to 5m0s for pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006" in namespace "e2e-tests-pods-fnn4g" to be "success or failure" Jan 4 11:58:25.036: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.809497ms Jan 4 11:58:28.042: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.022223383s Jan 4 11:58:30.346: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.325499892s Jan 4 11:58:32.373: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.353171244s Jan 4 11:58:34.385: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.364940765s Jan 4 11:58:36.400: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.380249246s Jan 4 11:58:39.829: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.809114224s Jan 4 11:58:41.870: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.84987055s STEP: Saw pod success Jan 4 11:58:41.870: INFO: Pod "client-envvars-831a042c-2ee9-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 11:58:41.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-831a042c-2ee9-11ea-9996-0242ac110006 container env3cont: STEP: delete the pod Jan 4 11:58:43.142: INFO: Waiting for pod client-envvars-831a042c-2ee9-11ea-9996-0242ac110006 to disappear Jan 4 11:58:43.161: INFO: Pod client-envvars-831a042c-2ee9-11ea-9996-0242ac110006 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:58:43.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fnn4g" for this suite. Jan 4 11:59:39.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:59:39.265: INFO: namespace: e2e-tests-pods-fnn4g, resource: bindings, ignored listing per whitelist Jan 4 11:59:39.410: INFO: namespace e2e-tests-pods-fnn4g deletion completed in 56.245713831s • [SLOW TEST:88.234 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:59:39.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 11:59:39.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jan 4 11:59:39.733: INFO: stderr: "" Jan 4 11:59:39.733: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", 
GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jan 4 11:59:39.741: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:59:39.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jkmbl" for this suite. Jan 4 11:59:45.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 11:59:45.931: INFO: namespace: e2e-tests-kubectl-jkmbl, resource: bindings, ignored listing per whitelist Jan 4 11:59:45.994: INFO: namespace e2e-tests-kubectl-jkmbl deletion completed in 6.236481272s S [SKIPPING] [6.583 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 11:59:39.741: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 11:59:45.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 11:59:58.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-gmfk9" for this suite. 
Jan 4 12:00:56.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:00:56.687: INFO: namespace: e2e-tests-kubelet-test-gmfk9, resource: bindings, ignored listing per whitelist Jan 4 12:00:56.772: INFO: namespace e2e-tests-kubelet-test-gmfk9 deletion completed in 58.420466063s • [SLOW TEST:70.778 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:00:56.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 4 12:00:57.260: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:01:23.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-jsv9c" for this suite. 
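The InitContainer spec above ("should invoke init containers on a RestartAlways pod") exercises ordering: every init container must run to completion, in order, before the regular containers start, and with RestartPolicy Always the main container then keeps running. A rough sketch of the shape of such a pod; the names, images, and commands are placeholders rather than the framework's generated values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two init containers run to completion in order before the main
	// container starts; RestartPolicy Always keeps the pod running afterwards.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run-1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}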
Jan 4 12:01:47.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:01:47.320: INFO: namespace: e2e-tests-init-container-jsv9c, resource: bindings, ignored listing per whitelist Jan 4 12:01:47.361: INFO: namespace e2e-tests-init-container-jsv9c deletion completed in 24.24839542s • [SLOW TEST:50.589 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:01:47.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jan 4 12:01:47.815: INFO: Waiting up to 5m0s for pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006" in namespace "e2e-tests-containers-kjm82" to be "success or failure" Jan 4 12:01:47.852: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 36.67213ms Jan 4 12:01:50.360: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54484822s Jan 4 12:01:53.078: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.262993629s Jan 4 12:01:55.104: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.28867003s Jan 4 12:01:58.432: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.616234484s Jan 4 12:02:01.409: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.59306915s Jan 4 12:02:05.363: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.547196671s Jan 4 12:02:07.375: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.55957853s Jan 4 12:02:10.742: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 22.926947714s Jan 4 12:02:12.749: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.933839467s STEP: Saw pod success Jan 4 12:02:12.749: INFO: Pod "client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:02:12.752: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006 container test-container: STEP: delete the pod Jan 4 12:02:15.667: INFO: Waiting for pod client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006 to disappear Jan 4 12:02:15.743: INFO: Pod client-containers-fbe1614d-2ee9-11ea-9996-0242ac110006 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:02:15.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-kjm82" for this suite. Jan 4 12:02:24.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:02:24.964: INFO: namespace: e2e-tests-containers-kjm82, resource: bindings, ignored listing per whitelist Jan 4 12:02:24.964: INFO: namespace e2e-tests-containers-kjm82 deletion completed in 8.526718963s • [SLOW TEST:37.603 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:02:24.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 4 12:03:04.080: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:04.080: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:04.168172 8 log.go:172] (0xc000957810) (0xc001898b40) Create stream I0104 12:03:04.168359 8 log.go:172] (0xc000957810) (0xc001898b40) Stream added, broadcasting: 1 I0104 12:03:04.175434 8 log.go:172] (0xc000957810) Reply frame received for 1 I0104 12:03:04.175474 8 log.go:172] (0xc000957810) (0xc001898c80) Create stream I0104 12:03:04.175482 8 log.go:172] (0xc000957810) (0xc001898c80) Stream added, broadcasting: 3 I0104 12:03:04.176866 8 log.go:172] (0xc000957810) Reply frame received for 3 I0104 12:03:04.176931 8 log.go:172] (0xc000957810) (0xc001898d20) Create stream I0104 12:03:04.176943 8 log.go:172] (0xc000957810) (0xc001898d20) Stream 
added, broadcasting: 5 I0104 12:03:04.178136 8 log.go:172] (0xc000957810) Reply frame received for 5 I0104 12:03:04.333726 8 log.go:172] (0xc000957810) Data frame received for 3 I0104 12:03:04.333809 8 log.go:172] (0xc001898c80) (3) Data frame handling I0104 12:03:04.333853 8 log.go:172] (0xc001898c80) (3) Data frame sent I0104 12:03:04.492593 8 log.go:172] (0xc000957810) (0xc001898c80) Stream removed, broadcasting: 3 I0104 12:03:04.492762 8 log.go:172] (0xc000957810) Data frame received for 1 I0104 12:03:04.492777 8 log.go:172] (0xc001898b40) (1) Data frame handling I0104 12:03:04.492796 8 log.go:172] (0xc001898b40) (1) Data frame sent I0104 12:03:04.492931 8 log.go:172] (0xc000957810) (0xc001898b40) Stream removed, broadcasting: 1 I0104 12:03:04.493188 8 log.go:172] (0xc000957810) (0xc001898d20) Stream removed, broadcasting: 5 I0104 12:03:04.493227 8 log.go:172] (0xc000957810) Go away received I0104 12:03:04.493436 8 log.go:172] (0xc000957810) (0xc001898b40) Stream removed, broadcasting: 1 I0104 12:03:04.493495 8 log.go:172] (0xc000957810) (0xc001898c80) Stream removed, broadcasting: 3 I0104 12:03:04.493501 8 log.go:172] (0xc000957810) (0xc001898d20) Stream removed, broadcasting: 5 Jan 4 12:03:04.493: INFO: Exec stderr: "" Jan 4 12:03:04.493: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:04.493: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:04.641483 8 log.go:172] (0xc0003c8580) (0xc001e30b40) Create stream I0104 12:03:04.641631 8 log.go:172] (0xc0003c8580) (0xc001e30b40) Stream added, broadcasting: 1 I0104 12:03:04.649498 8 log.go:172] (0xc0003c8580) Reply frame received for 1 I0104 12:03:04.649555 8 log.go:172] (0xc0003c8580) (0xc001e30be0) Create stream I0104 12:03:04.649565 8 log.go:172] (0xc0003c8580) (0xc001e30be0) Stream added, broadcasting: 3 I0104 12:03:04.651356 8 log.go:172] (0xc0003c8580) Reply frame received for 3 I0104 12:03:04.651421 8 log.go:172] (0xc0003c8580) (0xc001cbb7c0) Create stream I0104 12:03:04.651434 8 log.go:172] (0xc0003c8580) (0xc001cbb7c0) Stream added, broadcasting: 5 I0104 12:03:04.652740 8 log.go:172] (0xc0003c8580) Reply frame received for 5 I0104 12:03:04.800902 8 log.go:172] (0xc0003c8580) Data frame received for 3 I0104 12:03:04.800974 8 log.go:172] (0xc001e30be0) (3) Data frame handling I0104 12:03:04.800996 8 log.go:172] (0xc001e30be0) (3) Data frame sent I0104 12:03:04.973814 8 log.go:172] (0xc0003c8580) Data frame received for 1 I0104 12:03:04.973993 8 log.go:172] (0xc0003c8580) (0xc001cbb7c0) Stream removed, broadcasting: 5 I0104 12:03:04.974053 8 log.go:172] (0xc001e30b40) (1) Data frame handling I0104 12:03:04.974076 8 log.go:172] (0xc001e30b40) (1) Data frame sent I0104 12:03:04.974152 8 log.go:172] (0xc0003c8580) (0xc001e30be0) Stream removed, broadcasting: 3 I0104 12:03:04.974180 8 log.go:172] (0xc0003c8580) (0xc001e30b40) Stream removed, broadcasting: 1 I0104 12:03:04.974203 8 log.go:172] (0xc0003c8580) Go away received I0104 12:03:04.974503 8 log.go:172] (0xc0003c8580) (0xc001e30b40) Stream removed, broadcasting: 1 I0104 12:03:04.974526 8 log.go:172] (0xc0003c8580) (0xc001e30be0) Stream removed, broadcasting: 3 I0104 12:03:04.974535 8 log.go:172] (0xc0003c8580) (0xc001cbb7c0) Stream removed, broadcasting: 5 Jan 4 12:03:04.974: INFO: Exec stderr: "" Jan 4 12:03:04.974: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:04.974: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:05.081828 8 log.go:172] (0xc000531d90) (0xc001cbba40) Create stream I0104 12:03:05.081910 8 log.go:172] (0xc000531d90) (0xc001cbba40) Stream added, broadcasting: 1 I0104 12:03:05.088468 8 log.go:172] (0xc000531d90) Reply frame received for 1 I0104 12:03:05.088612 8 log.go:172] (0xc000531d90) (0xc001805a40) Create stream I0104 12:03:05.088638 8 log.go:172] (0xc000531d90) (0xc001805a40) Stream added, broadcasting: 3 I0104 12:03:05.091391 8 log.go:172] (0xc000531d90) Reply frame received for 3 I0104 12:03:05.091465 8 log.go:172] (0xc000531d90) (0xc001e30d20) Create stream I0104 12:03:05.091489 8 log.go:172] (0xc000531d90) (0xc001e30d20) Stream added, broadcasting: 5 I0104 12:03:05.093170 8 log.go:172] (0xc000531d90) Reply frame received for 5 I0104 12:03:05.220900 8 log.go:172] (0xc000531d90) Data frame received for 3 I0104 12:03:05.220998 8 log.go:172] (0xc001805a40) (3) Data frame handling I0104 12:03:05.221017 8 log.go:172] (0xc001805a40) (3) Data frame sent I0104 12:03:05.353568 8 log.go:172] (0xc000531d90) (0xc001805a40) Stream removed, broadcasting: 3 I0104 12:03:05.353782 8 log.go:172] (0xc000531d90) Data frame received for 1 I0104 12:03:05.353796 8 log.go:172] (0xc001cbba40) (1) Data frame handling I0104 12:03:05.353828 8 log.go:172] (0xc001cbba40) (1) Data frame sent I0104 12:03:05.353835 8 log.go:172] (0xc000531d90) (0xc001cbba40) Stream removed, broadcasting: 1 I0104 12:03:05.354047 8 log.go:172] (0xc000531d90) (0xc001e30d20) Stream removed, broadcasting: 5 I0104 12:03:05.354090 8 log.go:172] (0xc000531d90) (0xc001cbba40) Stream removed, broadcasting: 1 I0104 12:03:05.354103 8 log.go:172] (0xc000531d90) (0xc001805a40) Stream removed, broadcasting: 3 I0104 12:03:05.354116 8 log.go:172] (0xc000531d90) (0xc001e30d20) Stream removed, broadcasting: 5 I0104 12:03:05.354157 8 log.go:172] (0xc000531d90) Go away received Jan 4 12:03:05.354: INFO: Exec stderr: "" Jan 4 12:03:05.354: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:05.354: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:05.421352 8 log.go:172] (0xc001d362c0) (0xc001805d60) Create stream I0104 12:03:05.421389 8 log.go:172] (0xc001d362c0) (0xc001805d60) Stream added, broadcasting: 1 I0104 12:03:05.425246 8 log.go:172] (0xc001d362c0) Reply frame received for 1 I0104 12:03:05.425287 8 log.go:172] (0xc001d362c0) (0xc001805e00) Create stream I0104 12:03:05.425300 8 log.go:172] (0xc001d362c0) (0xc001805e00) Stream added, broadcasting: 3 I0104 12:03:05.426146 8 log.go:172] (0xc001d362c0) Reply frame received for 3 I0104 12:03:05.426182 8 log.go:172] (0xc001d362c0) (0xc001e30dc0) Create stream I0104 12:03:05.426196 8 log.go:172] (0xc001d362c0) (0xc001e30dc0) Stream added, broadcasting: 5 I0104 12:03:05.427359 8 log.go:172] (0xc001d362c0) Reply frame received for 5 I0104 12:03:05.585683 8 log.go:172] (0xc001d362c0) Data frame received for 3 I0104 12:03:05.585737 8 log.go:172] (0xc001805e00) (3) Data frame handling I0104 12:03:05.585765 8 log.go:172] (0xc001805e00) (3) Data frame sent I0104 12:03:05.705221 8 log.go:172] (0xc001d362c0) Data frame received for 1 I0104 12:03:05.705265 8 log.go:172] (0xc001805d60) (1) Data frame 
handling I0104 12:03:05.705281 8 log.go:172] (0xc001805d60) (1) Data frame sent I0104 12:03:05.705295 8 log.go:172] (0xc001d362c0) (0xc001805d60) Stream removed, broadcasting: 1 I0104 12:03:05.705715 8 log.go:172] (0xc001d362c0) (0xc001e30dc0) Stream removed, broadcasting: 5 I0104 12:03:05.705745 8 log.go:172] (0xc001d362c0) (0xc001805e00) Stream removed, broadcasting: 3 I0104 12:03:05.705791 8 log.go:172] (0xc001d362c0) (0xc001805d60) Stream removed, broadcasting: 1 I0104 12:03:05.705801 8 log.go:172] (0xc001d362c0) (0xc001805e00) Stream removed, broadcasting: 3 I0104 12:03:05.705805 8 log.go:172] (0xc001d362c0) (0xc001e30dc0) Stream removed, broadcasting: 5 I0104 12:03:05.706142 8 log.go:172] (0xc001d362c0) Go away received Jan 4 12:03:05.706: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 4 12:03:05.706: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:05.706: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:05.784607 8 log.go:172] (0xc001d36790) (0xc0024a2000) Create stream I0104 12:03:05.784686 8 log.go:172] (0xc001d36790) (0xc0024a2000) Stream added, broadcasting: 1 I0104 12:03:05.794443 8 log.go:172] (0xc001d36790) Reply frame received for 1 I0104 12:03:05.794523 8 log.go:172] (0xc001d36790) (0xc000bcc640) Create stream I0104 12:03:05.794558 8 log.go:172] (0xc001d36790) (0xc000bcc640) Stream added, broadcasting: 3 I0104 12:03:05.797605 8 log.go:172] (0xc001d36790) Reply frame received for 3 I0104 12:03:05.797629 8 log.go:172] (0xc001d36790) (0xc000bcc780) Create stream I0104 12:03:05.797638 8 log.go:172] (0xc001d36790) (0xc000bcc780) Stream added, broadcasting: 5 I0104 12:03:05.799209 8 log.go:172] (0xc001d36790) Reply frame received for 5 I0104 12:03:05.911707 8 log.go:172] (0xc001d36790) Data frame received for 3 I0104 12:03:05.911765 8 log.go:172] (0xc000bcc640) (3) Data frame handling I0104 12:03:05.911794 8 log.go:172] (0xc000bcc640) (3) Data frame sent I0104 12:03:06.035706 8 log.go:172] (0xc001d36790) (0xc000bcc780) Stream removed, broadcasting: 5 I0104 12:03:06.035798 8 log.go:172] (0xc001d36790) Data frame received for 1 I0104 12:03:06.035838 8 log.go:172] (0xc0024a2000) (1) Data frame handling I0104 12:03:06.035863 8 log.go:172] (0xc001d36790) (0xc000bcc640) Stream removed, broadcasting: 3 I0104 12:03:06.036000 8 log.go:172] (0xc0024a2000) (1) Data frame sent I0104 12:03:06.036153 8 log.go:172] (0xc001d36790) (0xc0024a2000) Stream removed, broadcasting: 1 I0104 12:03:06.036242 8 log.go:172] (0xc001d36790) Go away received I0104 12:03:06.036560 8 log.go:172] (0xc001d36790) (0xc0024a2000) Stream removed, broadcasting: 1 I0104 12:03:06.036577 8 log.go:172] (0xc001d36790) (0xc000bcc640) Stream removed, broadcasting: 3 I0104 12:03:06.036587 8 log.go:172] (0xc001d36790) (0xc000bcc780) Stream removed, broadcasting: 5 Jan 4 12:03:06.036: INFO: Exec stderr: "" Jan 4 12:03:06.036: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:06.036: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:06.144036 8 log.go:172] (0xc001d36c60) (0xc0024a2280) Create stream I0104 12:03:06.144134 8 log.go:172] (0xc001d36c60) (0xc0024a2280) Stream added, broadcasting: 1 
I0104 12:03:06.149457 8 log.go:172] (0xc001d36c60) Reply frame received for 1 I0104 12:03:06.149489 8 log.go:172] (0xc001d36c60) (0xc00001caa0) Create stream I0104 12:03:06.149496 8 log.go:172] (0xc001d36c60) (0xc00001caa0) Stream added, broadcasting: 3 I0104 12:03:06.150305 8 log.go:172] (0xc001d36c60) Reply frame received for 3 I0104 12:03:06.150326 8 log.go:172] (0xc001d36c60) (0xc0024a2320) Create stream I0104 12:03:06.150333 8 log.go:172] (0xc001d36c60) (0xc0024a2320) Stream added, broadcasting: 5 I0104 12:03:06.151011 8 log.go:172] (0xc001d36c60) Reply frame received for 5 I0104 12:03:06.313136 8 log.go:172] (0xc001d36c60) Data frame received for 3 I0104 12:03:06.313326 8 log.go:172] (0xc00001caa0) (3) Data frame handling I0104 12:03:06.313668 8 log.go:172] (0xc00001caa0) (3) Data frame sent I0104 12:03:06.429217 8 log.go:172] (0xc001d36c60) Data frame received for 1 I0104 12:03:06.429309 8 log.go:172] (0xc001d36c60) (0xc00001caa0) Stream removed, broadcasting: 3 I0104 12:03:06.429359 8 log.go:172] (0xc0024a2280) (1) Data frame handling I0104 12:03:06.429407 8 log.go:172] (0xc0024a2280) (1) Data frame sent I0104 12:03:06.429426 8 log.go:172] (0xc001d36c60) (0xc0024a2320) Stream removed, broadcasting: 5 I0104 12:03:06.429464 8 log.go:172] (0xc001d36c60) (0xc0024a2280) Stream removed, broadcasting: 1 I0104 12:03:06.429491 8 log.go:172] (0xc001d36c60) Go away received I0104 12:03:06.429673 8 log.go:172] (0xc001d36c60) (0xc0024a2280) Stream removed, broadcasting: 1 I0104 12:03:06.429687 8 log.go:172] (0xc001d36c60) (0xc00001caa0) Stream removed, broadcasting: 3 I0104 12:03:06.429698 8 log.go:172] (0xc001d36c60) (0xc0024a2320) Stream removed, broadcasting: 5 Jan 4 12:03:06.429: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 4 12:03:06.429: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:06.429: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:06.562727 8 log.go:172] (0xc000b2a2c0) (0xc00001cf00) Create stream I0104 12:03:06.562854 8 log.go:172] (0xc000b2a2c0) (0xc00001cf00) Stream added, broadcasting: 1 I0104 12:03:06.572151 8 log.go:172] (0xc000b2a2c0) Reply frame received for 1 I0104 12:03:06.572327 8 log.go:172] (0xc000b2a2c0) (0xc000bcc8c0) Create stream I0104 12:03:06.572346 8 log.go:172] (0xc000b2a2c0) (0xc000bcc8c0) Stream added, broadcasting: 3 I0104 12:03:06.577302 8 log.go:172] (0xc000b2a2c0) Reply frame received for 3 I0104 12:03:06.577491 8 log.go:172] (0xc000b2a2c0) (0xc001cbbae0) Create stream I0104 12:03:06.577505 8 log.go:172] (0xc000b2a2c0) (0xc001cbbae0) Stream added, broadcasting: 5 I0104 12:03:06.581429 8 log.go:172] (0xc000b2a2c0) Reply frame received for 5 I0104 12:03:06.736764 8 log.go:172] (0xc000b2a2c0) Data frame received for 3 I0104 12:03:06.736817 8 log.go:172] (0xc000bcc8c0) (3) Data frame handling I0104 12:03:06.736837 8 log.go:172] (0xc000bcc8c0) (3) Data frame sent I0104 12:03:06.863266 8 log.go:172] (0xc000b2a2c0) (0xc000bcc8c0) Stream removed, broadcasting: 3 I0104 12:03:06.863452 8 log.go:172] (0xc000b2a2c0) Data frame received for 1 I0104 12:03:06.863605 8 log.go:172] (0xc00001cf00) (1) Data frame handling I0104 12:03:06.863673 8 log.go:172] (0xc00001cf00) (1) Data frame sent I0104 12:03:06.863706 8 log.go:172] (0xc000b2a2c0) (0xc00001cf00) Stream removed, broadcasting: 1 I0104 
12:03:06.864858 8 log.go:172] (0xc000b2a2c0) (0xc001cbbae0) Stream removed, broadcasting: 5 I0104 12:03:06.864905 8 log.go:172] (0xc000b2a2c0) (0xc00001cf00) Stream removed, broadcasting: 1 I0104 12:03:06.864913 8 log.go:172] (0xc000b2a2c0) (0xc000bcc8c0) Stream removed, broadcasting: 3 I0104 12:03:06.864923 8 log.go:172] (0xc000b2a2c0) (0xc001cbbae0) Stream removed, broadcasting: 5 Jan 4 12:03:06.865: INFO: Exec stderr: "" Jan 4 12:03:06.865: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:06.865: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:07.086185 8 log.go:172] (0xc001fbe370) (0xc001cbbe00) Create stream I0104 12:03:07.086278 8 log.go:172] (0xc001fbe370) (0xc001cbbe00) Stream added, broadcasting: 1 I0104 12:03:07.092732 8 log.go:172] (0xc001fbe370) Reply frame received for 1 I0104 12:03:07.092816 8 log.go:172] (0xc001fbe370) (0xc00001cfa0) Create stream I0104 12:03:07.092838 8 log.go:172] (0xc001fbe370) (0xc00001cfa0) Stream added, broadcasting: 3 I0104 12:03:07.094296 8 log.go:172] (0xc001fbe370) Reply frame received for 3 I0104 12:03:07.094337 8 log.go:172] (0xc001fbe370) (0xc000bcca00) Create stream I0104 12:03:07.094346 8 log.go:172] (0xc001fbe370) (0xc000bcca00) Stream added, broadcasting: 5 I0104 12:03:07.095300 8 log.go:172] (0xc001fbe370) Reply frame received for 5 I0104 12:03:07.225854 8 log.go:172] (0xc001fbe370) Data frame received for 3 I0104 12:03:07.225939 8 log.go:172] (0xc00001cfa0) (3) Data frame handling I0104 12:03:07.225964 8 log.go:172] (0xc00001cfa0) (3) Data frame sent I0104 12:03:07.364375 8 log.go:172] (0xc001fbe370) (0xc00001cfa0) Stream removed, broadcasting: 3 I0104 12:03:07.364543 8 log.go:172] (0xc001fbe370) Data frame received for 1 I0104 12:03:07.364571 8 log.go:172] (0xc001cbbe00) (1) Data frame handling I0104 12:03:07.364596 8 log.go:172] (0xc001cbbe00) (1) Data frame sent I0104 12:03:07.364625 8 log.go:172] (0xc001fbe370) (0xc001cbbe00) Stream removed, broadcasting: 1 I0104 12:03:07.364838 8 log.go:172] (0xc001fbe370) (0xc000bcca00) Stream removed, broadcasting: 5 I0104 12:03:07.364850 8 log.go:172] (0xc001fbe370) Go away received I0104 12:03:07.365361 8 log.go:172] (0xc001fbe370) (0xc001cbbe00) Stream removed, broadcasting: 1 I0104 12:03:07.365441 8 log.go:172] (0xc001fbe370) (0xc00001cfa0) Stream removed, broadcasting: 3 I0104 12:03:07.365451 8 log.go:172] (0xc001fbe370) (0xc000bcca00) Stream removed, broadcasting: 5 Jan 4 12:03:07.365: INFO: Exec stderr: "" Jan 4 12:03:07.365: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:07.365: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:07.501651 8 log.go:172] (0xc000b2a790) (0xc00001d220) Create stream I0104 12:03:07.501772 8 log.go:172] (0xc000b2a790) (0xc00001d220) Stream added, broadcasting: 1 I0104 12:03:07.524670 8 log.go:172] (0xc000b2a790) Reply frame received for 1 I0104 12:03:07.524761 8 log.go:172] (0xc000b2a790) (0xc0013bc000) Create stream I0104 12:03:07.524781 8 log.go:172] (0xc000b2a790) (0xc0013bc000) Stream added, broadcasting: 3 I0104 12:03:07.526641 8 log.go:172] (0xc000b2a790) Reply frame received for 3 I0104 12:03:07.526673 8 log.go:172] (0xc000b2a790) (0xc001804000) Create stream I0104 
12:03:07.526684 8 log.go:172] (0xc000b2a790) (0xc001804000) Stream added, broadcasting: 5 I0104 12:03:07.528222 8 log.go:172] (0xc000b2a790) Reply frame received for 5 I0104 12:03:07.622352 8 log.go:172] (0xc000b2a790) Data frame received for 3 I0104 12:03:07.622404 8 log.go:172] (0xc0013bc000) (3) Data frame handling I0104 12:03:07.622425 8 log.go:172] (0xc0013bc000) (3) Data frame sent I0104 12:03:07.747741 8 log.go:172] (0xc000b2a790) Data frame received for 1 I0104 12:03:07.747826 8 log.go:172] (0xc00001d220) (1) Data frame handling I0104 12:03:07.747858 8 log.go:172] (0xc00001d220) (1) Data frame sent I0104 12:03:07.747887 8 log.go:172] (0xc000b2a790) (0xc00001d220) Stream removed, broadcasting: 1 I0104 12:03:07.748895 8 log.go:172] (0xc000b2a790) (0xc0013bc000) Stream removed, broadcasting: 3 I0104 12:03:07.748956 8 log.go:172] (0xc000b2a790) (0xc001804000) Stream removed, broadcasting: 5 I0104 12:03:07.749020 8 log.go:172] (0xc000b2a790) (0xc00001d220) Stream removed, broadcasting: 1 I0104 12:03:07.749030 8 log.go:172] (0xc000b2a790) (0xc0013bc000) Stream removed, broadcasting: 3 I0104 12:03:07.749038 8 log.go:172] (0xc000b2a790) (0xc001804000) Stream removed, broadcasting: 5 Jan 4 12:03:07.749: INFO: Exec stderr: "" Jan 4 12:03:07.749: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7m54v PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 12:03:07.749: INFO: >>> kubeConfig: /root/.kube/config I0104 12:03:07.750587 8 log.go:172] (0xc000b2a790) Go away received I0104 12:03:07.813693 8 log.go:172] (0xc000531b80) (0xc0018041e0) Create stream I0104 12:03:07.813816 8 log.go:172] (0xc000531b80) (0xc0018041e0) Stream added, broadcasting: 1 I0104 12:03:07.846251 8 log.go:172] (0xc000531b80) Reply frame received for 1 I0104 12:03:07.846354 8 log.go:172] (0xc000531b80) (0xc00038a500) Create stream I0104 12:03:07.846370 8 log.go:172] (0xc000531b80) (0xc00038a500) Stream added, broadcasting: 3 I0104 12:03:07.847975 8 log.go:172] (0xc000531b80) Reply frame received for 3 I0104 12:03:07.847998 8 log.go:172] (0xc000531b80) (0xc001804280) Create stream I0104 12:03:07.848008 8 log.go:172] (0xc000531b80) (0xc001804280) Stream added, broadcasting: 5 I0104 12:03:07.849452 8 log.go:172] (0xc000531b80) Reply frame received for 5 I0104 12:03:07.993767 8 log.go:172] (0xc000531b80) Data frame received for 3 I0104 12:03:07.993860 8 log.go:172] (0xc00038a500) (3) Data frame handling I0104 12:03:07.993905 8 log.go:172] (0xc00038a500) (3) Data frame sent I0104 12:03:08.139061 8 log.go:172] (0xc000531b80) Data frame received for 1 I0104 12:03:08.139149 8 log.go:172] (0xc0018041e0) (1) Data frame handling I0104 12:03:08.139183 8 log.go:172] (0xc0018041e0) (1) Data frame sent I0104 12:03:08.139211 8 log.go:172] (0xc000531b80) (0xc0018041e0) Stream removed, broadcasting: 1 I0104 12:03:08.140417 8 log.go:172] (0xc000531b80) (0xc00038a500) Stream removed, broadcasting: 3 I0104 12:03:08.140948 8 log.go:172] (0xc000531b80) (0xc001804280) Stream removed, broadcasting: 5 I0104 12:03:08.141030 8 log.go:172] (0xc000531b80) (0xc0018041e0) Stream removed, broadcasting: 1 I0104 12:03:08.141047 8 log.go:172] (0xc000531b80) (0xc00038a500) Stream removed, broadcasting: 3 I0104 12:03:08.141062 8 log.go:172] (0xc000531b80) (0xc001804280) Stream removed, broadcasting: 5 Jan 4 12:03:08.141: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:03:08.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0104 12:03:08.141747 8 log.go:172] (0xc000531b80) Go away received STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-7m54v" for this suite. Jan 4 12:03:54.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:03:54.450: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-7m54v, resource: bindings, ignored listing per whitelist Jan 4 12:03:54.541: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-7m54v deletion completed in 46.38160706s • [SLOW TEST:89.577 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:03:54.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 4 12:03:54.882: INFO: Waiting up to 5m0s for pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-qrf8d" to be "success or failure" Jan 4 12:03:55.005: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 123.295299ms Jan 4 12:03:57.560: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.67817208s Jan 4 12:03:59.578: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.695594026s Jan 4 12:04:01.598: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715956177s Jan 4 12:04:04.101: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.219072307s Jan 4 12:04:06.117: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.234556008s Jan 4 12:04:08.138: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.255717025s STEP: Saw pod success Jan 4 12:04:08.138: INFO: Pod "pod-47b6bc32-2eea-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:04:08.144: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-47b6bc32-2eea-11ea-9996-0242ac110006 container test-container: STEP: delete the pod Jan 4 12:04:08.292: INFO: Waiting for pod pod-47b6bc32-2eea-11ea-9996-0242ac110006 to disappear Jan 4 12:04:08.314: INFO: Pod pod-47b6bc32-2eea-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:04:08.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qrf8d" for this suite. Jan 4 12:04:16.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:04:16.815: INFO: namespace: e2e-tests-emptydir-qrf8d, resource: bindings, ignored listing per whitelist Jan 4 12:04:16.869: INFO: namespace e2e-tests-emptydir-qrf8d deletion completed in 8.536589585s • [SLOW TEST:22.327 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:04:16.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 12:04:17.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 4 12:04:17.164: INFO: stderr: "" Jan 4 12:04:17.165: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:04:17.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j28m5" for this suite. 
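The EmptyDir "(root,0666,tmpfs)" spec earlier in this block, and the "(root,0777,tmpfs)" case later in the run, both mount an emptyDir backed by memory (tmpfs) and verify the mode of a file created in it. A sketch of a comparable pod; the pod name, mount path, and shell command are illustrative, and the conformance test itself drives the check through its own test image rather than a shell one-liner.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Medium: Memory makes the emptyDir a tmpfs mount; the container then
	// creates a file there and inspects its permission bits.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test", // placeholder mount path
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}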
Jan 4 12:04:23.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:04:23.307: INFO: namespace: e2e-tests-kubectl-j28m5, resource: bindings, ignored listing per whitelist Jan 4 12:04:23.402: INFO: namespace e2e-tests-kubectl-j28m5 deletion completed in 6.188111443s • [SLOW TEST:6.533 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:04:23.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:04:36.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-dwx7f" for this suite. 
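The ReplicationController spec above first creates a bare pod named pod-adoption carrying a 'name' label, then creates a controller whose selector matches that label; the controller is expected to adopt the existing orphan pod instead of creating a new one. A sketch of a matching controller object; the label value, replica count, image, and names are chosen here for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Labels shared by the pre-existing pod and the controller's selector;
	// the matching selector is what lets the controller adopt the orphan pod.
	labels := map[string]string{"name": "pod-adoption"} // assumed label value

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption",
						Image: "docker.io/library/nginx:1.14-alpine", // placeholder image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}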
Jan 4 12:05:00.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:05:00.776: INFO: namespace: e2e-tests-replication-controller-dwx7f, resource: bindings, ignored listing per whitelist Jan 4 12:05:00.854: INFO: namespace e2e-tests-replication-controller-dwx7f deletion completed in 24.135587974s • [SLOW TEST:37.452 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:05:00.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 4 12:05:23.505: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:05:23.520: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:05:25.520: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:05:25.648: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:05:27.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:05:28.111: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:05:29.520: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:05:29.573: INFO: Pod pod-with-poststart-http-hook still exists Jan 4 12:05:31.520: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 4 12:05:31.531: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:05:31.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-x86fs" for this suite. 
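The lifecycle-hook spec above first starts a helper pod to receive the hook request, then creates a pod whose container declares a postStart httpGet hook pointed at that helper, and finally checks that the hook fired before tearing both pods down. A sketch of the hook-bearing container; the image, path, target IP, and port are placeholders, and the handler type is spelled corev1.Handler as in the API version contemporaneous with this run (later releases rename it LifecycleHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The kubelet issues this HTTP GET against the handler pod immediately
	// after the container starts (postStart lifecycle hook).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "busybox", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // placeholder path
							Host: "10.32.0.4",           // placeholder: IP of the handler pod
							Port: intstr.FromInt(8080),  // placeholder port
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}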
Jan 4 12:05:55.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:05:55.759: INFO: namespace: e2e-tests-container-lifecycle-hook-x86fs, resource: bindings, ignored listing per whitelist Jan 4 12:05:55.759: INFO: namespace e2e-tests-container-lifecycle-hook-x86fs deletion completed in 24.220064365s • [SLOW TEST:54.905 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:05:55.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 4 12:05:56.010: INFO: Waiting up to 5m0s for pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-vg94w" to be "success or failure" Jan 4 12:05:56.074: INFO: Pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 63.248368ms Jan 4 12:05:58.085: INFO: Pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074928603s Jan 4 12:06:00.107: INFO: Pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096975904s Jan 4 12:06:02.147: INFO: Pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136269574s Jan 4 12:06:04.169: INFO: Pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158352897s STEP: Saw pod success Jan 4 12:06:04.169: INFO: Pod "pod-8fe88c7a-2eea-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:06:04.181: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8fe88c7a-2eea-11ea-9996-0242ac110006 container test-container: STEP: delete the pod Jan 4 12:06:04.947: INFO: Waiting for pod pod-8fe88c7a-2eea-11ea-9996-0242ac110006 to disappear Jan 4 12:06:05.061: INFO: Pod pod-8fe88c7a-2eea-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:06:05.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vg94w" for this suite. 
Jan 4 12:06:11.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:06:11.174: INFO: namespace: e2e-tests-emptydir-vg94w, resource: bindings, ignored listing per whitelist Jan 4 12:06:11.246: INFO: namespace e2e-tests-emptydir-vg94w deletion completed in 6.161094598s • [SLOW TEST:15.486 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:06:11.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s6wsx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s6wsx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s6wsx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 149.133.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.133.149_udp@PTR;check="$$(dig +tcp +noall +answer +search 149.133.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.133.149_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s6wsx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s6wsx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s6wsx.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s6wsx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 149.133.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.133.149_udp@PTR;check="$$(dig +tcp +noall +answer +search 149.133.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.133.149_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 4 12:06:27.843: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.850: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.855: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.860: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.865: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.869: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.873: INFO: Unable to read 10.100.133.149_udp@PTR from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.881: INFO: Unable to read 10.100.133.149_tcp@PTR from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.887: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.893: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.900: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s6wsx from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.905: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s6wsx from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.911: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s6wsx.svc from pod 
e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.917: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.922: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.937: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.942: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.947: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.954: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.960: INFO: Unable to read 10.100.133.149_udp@PTR from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.967: INFO: Unable to read 10.100.133.149_tcp@PTR from pod e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-99408da2-2eea-11ea-9996-0242ac110006) Jan 4 12:06:27.967: INFO: Lookups using e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.100.133.149_udp@PTR 10.100.133.149_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s6wsx jessie_tcp@dns-test-service.e2e-tests-dns-s6wsx jessie_udp@dns-test-service.e2e-tests-dns-s6wsx.svc jessie_tcp@dns-test-service.e2e-tests-dns-s6wsx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s6wsx.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s6wsx.svc jessie_udp@PodARecord 
jessie_tcp@PodARecord 10.100.133.149_udp@PTR 10.100.133.149_tcp@PTR] Jan 4 12:06:33.585: INFO: DNS probes using e2e-tests-dns-s6wsx/dns-test-99408da2-2eea-11ea-9996-0242ac110006 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:06:34.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-s6wsx" for this suite. Jan 4 12:06:42.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:06:42.451: INFO: namespace: e2e-tests-dns-s6wsx, resource: bindings, ignored listing per whitelist Jan 4 12:06:42.658: INFO: namespace e2e-tests-dns-s6wsx deletion completed in 8.458469712s • [SLOW TEST:31.412 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:06:42.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 12:06:43.126: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:06:44.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-2w25x" for this suite. 
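For reference, a minimal sketch of the create/delete cycle this CustomResourceDefinition spec exercises, expressed with kubectl against the v1beta1 API served by a v1.13 apiserver; the definition name "crontabs.stable.example.com" and its fields are illustrative placeholders, not the randomly generated object the test actually creates through the Go client:

    # Create a throwaway CRD, confirm it is served, then delete it.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com   # placeholder: must be <plural>.<group>
    spec:
      group: stable.example.com
      version: v1
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
    EOF
    kubectl get crd crontabs.stable.example.com
    kubectl delete crd crontabs.stable.example.com

The spec above essentially checks that both the create and the delete round-trip through the apiserver without error.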
Jan 4 12:06:50.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:06:50.448: INFO: namespace: e2e-tests-custom-resource-definition-2w25x, resource: bindings, ignored listing per whitelist Jan 4 12:06:50.539: INFO: namespace e2e-tests-custom-resource-definition-2w25x deletion completed in 6.259498794s • [SLOW TEST:7.880 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:06:50.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:08:05.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-x4hph" for this suite. 
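The container-runtime block above asserts on a pod's Phase, Ready condition, RestartCount and State as reported by the kubelet. A rough way to inspect the same status fields by hand, assuming a hypothetical pod name (the namespace is taken from the log, but both are placeholders, since the test pods are gone once the namespace is destroyed):

    NS=e2e-tests-container-runtime-x4hph   # namespace from the log above
    POD=terminate-cmd-rpa                  # hypothetical pod name
    # Overall pod phase (Pending/Running/Succeeded/Failed)
    kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.phase}{"\n"}'
    # Restart count and current state of the first container
    kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
    kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'
    # Ready condition
    kubectl -n "$NS" get pod "$POD" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'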
Jan 4 12:08:11.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:08:11.560: INFO: namespace: e2e-tests-container-runtime-x4hph, resource: bindings, ignored listing per whitelist Jan 4 12:08:11.566: INFO: namespace e2e-tests-container-runtime-x4hph deletion completed in 6.264547577s • [SLOW TEST:81.026 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:08:11.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jan 4 12:08:12.107: INFO: Waiting up to 5m0s for pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006" in namespace "e2e-tests-var-expansion-lcp95" to be "success or failure" Jan 4 12:08:12.147: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 39.824968ms Jan 4 12:08:14.521: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413833743s Jan 4 12:08:16.563: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455476146s Jan 4 12:08:19.009: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.901196505s Jan 4 12:08:21.017: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909664111s Jan 4 12:08:23.035: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.927789444s Jan 4 12:08:25.056: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.948460337s STEP: Saw pod success Jan 4 12:08:25.056: INFO: Pod "var-expansion-e106859c-2eea-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:08:25.061: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e106859c-2eea-11ea-9996-0242ac110006 container dapi-container: STEP: delete the pod Jan 4 12:08:25.294: INFO: Waiting for pod var-expansion-e106859c-2eea-11ea-9996-0242ac110006 to disappear Jan 4 12:08:25.302: INFO: Pod var-expansion-e106859c-2eea-11ea-9996-0242ac110006 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:08:25.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-lcp95" for this suite. Jan 4 12:08:31.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:08:31.374: INFO: namespace: e2e-tests-var-expansion-lcp95, resource: bindings, ignored listing per whitelist Jan 4 12:08:31.573: INFO: namespace e2e-tests-var-expansion-lcp95 deletion completed in 6.263102735s • [SLOW TEST:20.008 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:08:31.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jan 4 12:08:32.354: INFO: created pod pod-service-account-defaultsa Jan 4 12:08:32.354: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 4 12:08:32.384: INFO: created pod pod-service-account-mountsa Jan 4 12:08:32.385: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 4 12:08:32.433: INFO: created pod pod-service-account-nomountsa Jan 4 12:08:32.433: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 4 12:08:32.458: INFO: created pod pod-service-account-defaultsa-mountspec Jan 4 12:08:32.458: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 4 12:08:32.599: INFO: created pod pod-service-account-mountsa-mountspec Jan 4 12:08:32.599: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 4 12:08:32.819: INFO: created pod pod-service-account-nomountsa-mountspec Jan 4 12:08:32.819: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 4 12:08:32.893: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 4 12:08:32.893: INFO: pod 
pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 4 12:08:33.070: INFO: created pod pod-service-account-mountsa-nomountspec Jan 4 12:08:33.070: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 4 12:08:33.138: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 4 12:08:33.138: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:08:33.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-4sxrb" for this suite. Jan 4 12:09:07.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:09:07.258: INFO: namespace: e2e-tests-svcaccounts-4sxrb, resource: bindings, ignored listing per whitelist Jan 4 12:09:07.343: INFO: namespace e2e-tests-svcaccounts-4sxrb deletion completed in 31.930653428s • [SLOW TEST:35.769 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:09:07.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-020e8be8-2eeb-11ea-9996-0242ac110006 STEP: Creating secret with name secret-projected-all-test-volume-020e8bbd-2eeb-11ea-9996-0242ac110006 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 4 12:09:07.542: INFO: Waiting up to 5m0s for pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-vjwm5" to be "success or failure" Jan 4 12:09:07.564: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 22.164976ms Jan 4 12:09:10.081: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53941421s Jan 4 12:09:12.095: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552918498s Jan 4 12:09:14.143: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.60076709s Jan 4 12:09:16.180: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.638291469s Jan 4 12:09:18.197: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.65501071s STEP: Saw pod success Jan 4 12:09:18.197: INFO: Pod "projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:09:18.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006 container projected-all-volume-test: STEP: delete the pod Jan 4 12:09:18.408: INFO: Waiting for pod projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006 to disappear Jan 4 12:09:18.415: INFO: Pod projected-volume-020e8ab9-2eeb-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:09:18.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vjwm5" for this suite. Jan 4 12:09:26.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:09:26.743: INFO: namespace: e2e-tests-projected-vjwm5, resource: bindings, ignored listing per whitelist Jan 4 12:09:26.845: INFO: namespace e2e-tests-projected-vjwm5 deletion completed in 8.420454177s • [SLOW TEST:19.502 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:09:26.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 12:09:55.242: INFO: Container started at 2020-01-04 12:09:37 +0000 UTC, pod became ready at 2020-01-04 12:09:54 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:09:55.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-vqpp9" for this suite. 
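The probe spec above reports the container starting at 12:09:37 and only becoming ready at 12:09:54, which is the behaviour under test: with a readiness probe configured, the pod should not be marked Ready before the probe's initial delay has elapsed, and it should never restart. A minimal sketch of such a pod, with a hypothetical name and a deliberately simple probe (this is not the test's actual manifest):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-demo              # hypothetical name
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["true"]           # always succeeds once probed
          initialDelaySeconds: 15
          periodSeconds: 5
    EOF
    # The Ready condition should stay False until the initial delay has passed:
    kubectl get pod readiness-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'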
Jan 4 12:10:21.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:10:21.566: INFO: namespace: e2e-tests-container-probe-vqpp9, resource: bindings, ignored listing per whitelist Jan 4 12:10:21.575: INFO: namespace e2e-tests-container-probe-vqpp9 deletion completed in 26.327427882s • [SLOW TEST:54.729 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:10:21.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 4 12:10:22.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-xjkn6" to be "success or failure" Jan 4 12:10:22.125: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 39.649085ms Jan 4 12:10:24.332: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246768492s Jan 4 12:10:26.363: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278445248s Jan 4 12:10:32.205: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.120434915s Jan 4 12:10:35.293: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.207614328s Jan 4 12:10:37.317: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.232364858s Jan 4 12:10:39.527: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.44202687s Jan 4 12:10:42.643: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.558205637s STEP: Saw pod success Jan 4 12:10:42.643: INFO: Pod "downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:10:42.679: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006 container client-container: STEP: delete the pod Jan 4 12:10:43.593: INFO: Waiting for pod downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006 to disappear Jan 4 12:10:43.614: INFO: Pod downwardapi-volume-2e595771-2eeb-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:10:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xjkn6" for this suite. Jan 4 12:10:51.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:10:52.011: INFO: namespace: e2e-tests-projected-xjkn6, resource: bindings, ignored listing per whitelist Jan 4 12:10:52.653: INFO: namespace e2e-tests-projected-xjkn6 deletion completed in 9.027383976s • [SLOW TEST:31.078 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:10:52.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 4 12:10:53.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-7v4j2" to be "success or failure" Jan 4 12:10:53.324: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 144.328672ms Jan 4 12:10:55.406: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22614235s Jan 4 12:10:57.422: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242144924s Jan 4 12:10:59.455: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275689813s Jan 4 12:11:02.011: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.831827578s Jan 4 12:11:04.262: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.082732295s Jan 4 12:11:06.527: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.347891951s Jan 4 12:11:08.554: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.374271576s STEP: Saw pod success Jan 4 12:11:08.554: INFO: Pod "downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:11:08.561: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006 container client-container: STEP: delete the pod Jan 4 12:11:09.515: INFO: Waiting for pod downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006 to disappear Jan 4 12:11:09.609: INFO: Pod downwardapi-volume-40f17676-2eeb-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:11:09.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7v4j2" for this suite. Jan 4 12:11:15.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:11:15.863: INFO: namespace: e2e-tests-projected-7v4j2, resource: bindings, ignored listing per whitelist Jan 4 12:11:15.884: INFO: namespace e2e-tests-projected-7v4j2 deletion completed in 6.262151073s • [SLOW TEST:23.229 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:11:15.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 12:11:16.317: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 4 12:11:21.947: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 4 12:11:25.971: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 4 12:11:28.002: INFO: Creating deployment "test-rollover-deployment" Jan 4 12:11:28.094: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 4 12:11:30.285: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 4 12:11:30.906: INFO: Ensure that both replica sets have 1 created replica Jan 4 
12:11:30.937: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 4 12:11:30.978: INFO: Updating deployment test-rollover-deployment Jan 4 12:11:30.978: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 4 12:11:33.078: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 4 12:11:33.119: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 4 12:11:33.156: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:33.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:35.410: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:35.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:37.183: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:37.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:41.086: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:41.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:41.504: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:41.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:43.415: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:43.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:45.261: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:45.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736691, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:47.179: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:47.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736705, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:49.180: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:49.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736705, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:51.183: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:51.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736705, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:53.185: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:53.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736705, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:55.187: INFO: all replica sets need to contain the pod-template-hash label Jan 4 12:11:55.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736705, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713736688, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 4 12:11:57.200: INFO: Jan 4 12:11:57.200: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 4 12:11:57.220: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-zz65x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zz65x/deployments/test-rollover-deployment,UID:55d04841-2eeb-11ea-a994-fa163e34d433,ResourceVersion:17136258,Generation:2,CreationTimestamp:2020-01-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 12:11:28 +0000 UTC 2020-01-04 12:11:28 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 12:11:55 +0000 UTC 2020-01-04 12:11:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 4 12:11:57.230: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-zz65x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zz65x/replicasets/test-rollover-deployment-5b8479fdb6,UID:5797e764-2eeb-11ea-a994-fa163e34d433,ResourceVersion:17136248,Generation:2,CreationTimestamp:2020-01-04 12:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 55d04841-2eeb-11ea-a994-fa163e34d433 0xc000b2f697 0xc000b2f698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 4 12:11:57.230: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 4 12:11:57.230: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-zz65x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zz65x/replicasets/test-rollover-controller,UID:4ec7abf5-2eeb-11ea-a994-fa163e34d433,ResourceVersion:17136257,Generation:2,CreationTimestamp:2020-01-04 12:11:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 55d04841-2eeb-11ea-a994-fa163e34d433 0xc000b2f507 0xc000b2f508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 12:11:57.231: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-zz65x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zz65x/replicasets/test-rollover-deployment-58494b7559,UID:55e1ddad-2eeb-11ea-a994-fa163e34d433,ResourceVersion:17136211,Generation:2,CreationTimestamp:2020-01-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 55d04841-2eeb-11ea-a994-fa163e34d433 0xc000b2f5c7 0xc000b2f5c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 12:11:57.242: INFO: Pod "test-rollover-deployment-5b8479fdb6-rc2ns" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-rc2ns,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-zz65x,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zz65x/pods/test-rollover-deployment-5b8479fdb6-rc2ns,UID:57c220be-2eeb-11ea-a994-fa163e34d433,ResourceVersion:17136234,Generation:0,CreationTimestamp:2020-01-04 12:11:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 5797e764-2eeb-11ea-a994-fa163e34d433 0xc0012ac537 0xc0012ac538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8xhxl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8xhxl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8xhxl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012ac6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012ac6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:11:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:11:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:11:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-01-04 12:11:31 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-04 12:11:31 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 12:11:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2ec7c2d03b2e6f04f4a928990a7226bfdcf5e9445a7c0fba4a400c3543be2269}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:11:57.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zz65x" for this suite. Jan 4 12:12:07.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:12:07.558: INFO: namespace: e2e-tests-deployment-zz65x, resource: bindings, ignored listing per whitelist Jan 4 12:12:07.617: INFO: namespace e2e-tests-deployment-zz65x deletion completed in 10.368970093s • [SLOW TEST:51.732 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:12:07.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-6da27da0-2eeb-11ea-9996-0242ac110006 STEP: Creating configMap with name cm-test-opt-upd-6da27f42-2eeb-11ea-9996-0242ac110006 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6da27da0-2eeb-11ea-9996-0242ac110006 STEP: Updating configmap cm-test-opt-upd-6da27f42-2eeb-11ea-9996-0242ac110006 STEP: Creating configMap with name cm-test-opt-create-6da27f61-2eeb-11ea-9996-0242ac110006 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:13:45.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nsc5x" for this suite. 
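The optional-configMap volume steps logged above (create two configMaps, mount them, delete one, update the other, create a third) are driven by a pod spec the log does not print. The sketch below is a hedged reconstruction of the key piece, an optional configMap volume; all names are illustrative and the busybox command only keeps the pod running while the kubelet resyncs the volume.
# Hedged sketch: an optional configMap volume. The pod starts even if the configMap
# is absent, and the kubelet refreshes the mounted keys as the configMap is deleted,
# updated, or recreated, which is the kind of change the test waits to observe.
kubectl create configmap cm-opt-demo --from-literal=data-1=value-1   # illustrative name
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-optional-demo                                      # illustrative name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm
  volumes:
  - name: cm-volume
    configMap:
      name: cm-opt-demo
      optional: true                                                 # the property this test exercises
EOF
kubectl delete configmap cm-opt-demo   # the pod keeps running; the mounted contents are resynced to the change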
Jan 4 12:14:09.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:14:09.460: INFO: namespace: e2e-tests-configmap-nsc5x, resource: bindings, ignored listing per whitelist Jan 4 12:14:09.556: INFO: namespace e2e-tests-configmap-nsc5x deletion completed in 24.191114861s • [SLOW TEST:121.939 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:14:09.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 4 12:14:09.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-zkbrm run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 4 12:14:25.761: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0104 12:14:23.904203 193 log.go:172] (0xc0006fc0b0) (0xc000aa6140) Create stream\nI0104 12:14:23.904272 193 log.go:172] (0xc0006fc0b0) (0xc000aa6140) Stream added, broadcasting: 1\nI0104 12:14:23.968143 193 log.go:172] (0xc0006fc0b0) Reply frame received for 1\nI0104 12:14:23.968310 193 log.go:172] (0xc0006fc0b0) (0xc0006b03c0) Create stream\nI0104 12:14:23.968334 193 log.go:172] (0xc0006fc0b0) (0xc0006b03c0) Stream added, broadcasting: 3\nI0104 12:14:23.972254 193 log.go:172] (0xc0006fc0b0) Reply frame received for 3\nI0104 12:14:23.972388 193 log.go:172] (0xc0006fc0b0) (0xc000aa6000) Create stream\nI0104 12:14:23.972411 193 log.go:172] (0xc0006fc0b0) (0xc000aa6000) Stream added, broadcasting: 5\nI0104 12:14:23.974266 193 log.go:172] (0xc0006fc0b0) Reply frame received for 5\nI0104 12:14:23.974328 193 log.go:172] (0xc0006fc0b0) (0xc000aa60a0) Create stream\nI0104 12:14:23.974345 193 log.go:172] (0xc0006fc0b0) (0xc000aa60a0) Stream added, broadcasting: 7\nI0104 12:14:23.976124 193 log.go:172] (0xc0006fc0b0) Reply frame received for 7\nI0104 12:14:23.976643 193 log.go:172] (0xc0006b03c0) (3) Writing data frame\nI0104 12:14:23.977020 193 log.go:172] (0xc0006b03c0) (3) Writing data frame\nI0104 12:14:24.020626 193 log.go:172] (0xc0006fc0b0) Data frame received for 5\nI0104 12:14:24.020683 193 log.go:172] (0xc000aa6000) (5) Data frame handling\nI0104 12:14:24.020711 193 log.go:172] (0xc000aa6000) (5) Data frame sent\nI0104 12:14:24.020717 193 log.go:172] (0xc0006fc0b0) Data frame received for 5\nI0104 12:14:24.020728 193 log.go:172] (0xc000aa6000) (5) Data frame handling\nI0104 12:14:24.020770 193 log.go:172] (0xc000aa6000) (5) Data frame sent\nI0104 12:14:25.722825 193 log.go:172] (0xc0006fc0b0) (0xc0006b03c0) Stream removed, broadcasting: 3\nI0104 12:14:25.722998 193 log.go:172] (0xc0006fc0b0) Data frame received for 1\nI0104 12:14:25.723021 193 log.go:172] (0xc000aa6140) (1) Data frame handling\nI0104 12:14:25.723042 193 log.go:172] (0xc000aa6140) (1) Data frame sent\nI0104 12:14:25.723058 193 log.go:172] (0xc0006fc0b0) (0xc000aa6140) Stream removed, broadcasting: 1\nI0104 12:14:25.723211 193 log.go:172] (0xc0006fc0b0) (0xc000aa60a0) Stream removed, broadcasting: 7\nI0104 12:14:25.723265 193 log.go:172] (0xc0006fc0b0) (0xc000aa6000) Stream removed, broadcasting: 5\nI0104 12:14:25.723299 193 log.go:172] (0xc0006fc0b0) Go away received\nI0104 12:14:25.723479 193 log.go:172] (0xc0006fc0b0) (0xc000aa6140) Stream removed, broadcasting: 1\nI0104 12:14:25.723604 193 log.go:172] (0xc0006fc0b0) (0xc0006b03c0) Stream removed, broadcasting: 3\nI0104 12:14:25.723626 193 log.go:172] (0xc0006fc0b0) (0xc000aa6000) Stream removed, broadcasting: 5\nI0104 12:14:25.723645 193 log.go:172] (0xc0006fc0b0) (0xc000aa60a0) Stream removed, broadcasting: 7\n" Jan 4 12:14:25.761: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:14:27.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zkbrm" for this suite. 
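Stripped of the test harness, the kubectl invocation quoted above reduces to the hedged sketch below; the namespace name is illustrative and an interactive shell is assumed. --rm deletes the job as soon as the attached session ends, which is what the verification step that follows checks.
# Hedged sketch: run a one-off Job, attach stdin, and let --rm delete it on exit.
kubectl --namespace=kubectl-demo run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# Anything typed is echoed back; closing stdin prints "stdin closed", the session detaches,
# and kubectl deletes the job, matching the "job.batch ... deleted" stdout shown above.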
Jan 4 12:14:33.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:14:33.918: INFO: namespace: e2e-tests-kubectl-zkbrm, resource: bindings, ignored listing per whitelist Jan 4 12:14:34.063: INFO: namespace e2e-tests-kubectl-zkbrm deletion completed in 6.266212252s • [SLOW TEST:24.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:14:34.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 4 12:15:08.450: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:08.468: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:10.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:10.502: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:12.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:12.500: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:14.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:14.505: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:16.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:16.552: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:18.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:18.495: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:20.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:20.501: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:22.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:22.493: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:24.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:24.531: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:26.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:26.505: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:28.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear 
Jan 4 12:15:28.502: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:30.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:30.719: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:32.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:32.514: INFO: Pod pod-with-prestop-exec-hook still exists Jan 4 12:15:34.469: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 4 12:15:34.498: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:15:34.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5k2j9" for this suite. Jan 4 12:16:00.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:16:00.694: INFO: namespace: e2e-tests-container-lifecycle-hook-5k2j9, resource: bindings, ignored listing per whitelist Jan 4 12:16:00.796: INFO: namespace e2e-tests-container-lifecycle-hook-5k2j9 deletion completed in 26.229156364s • [SLOW TEST:86.732 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:16:00.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:16:17.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-hxrqk" for this suite. 
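The hostAliases pod exercised above is only referenced indirectly; the manifest below is a hedged sketch of what such a pod generally looks like, with illustrative names and addresses. The kubelet merges the aliases into the container's /etc/hosts, which is the file the test inspects.
# Hedged sketch: hostAliases entries that the kubelet writes into the pod's /etc/hosts.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo                          # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"                                 # illustrative hostnames
    - "bar.local"
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF
kubectl logs hostaliases-demo                     # expect a "127.0.0.1 foo.local bar.local" entry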
Jan 4 12:17:03.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:17:03.521: INFO: namespace: e2e-tests-kubelet-test-hxrqk, resource: bindings, ignored listing per whitelist Jan 4 12:17:03.535: INFO: namespace e2e-tests-kubelet-test-hxrqk deletion completed in 46.283548091s • [SLOW TEST:62.739 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:17:03.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 4 12:17:03.876: INFO: Waiting up to 5m0s for pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-k4hfj" to be "success or failure" Jan 4 12:17:03.896: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.594303ms Jan 4 12:17:05.928: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051770747s Jan 4 12:17:07.947: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069932971s Jan 4 12:17:10.165: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288538057s Jan 4 12:17:12.210: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 8.333795399s Jan 4 12:17:14.228: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.351601173s STEP: Saw pod success Jan 4 12:17:14.228: INFO: Pod "downward-api-1df2e298-2eec-11ea-9996-0242ac110006" satisfied condition "success or failure" Jan 4 12:17:14.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1df2e298-2eec-11ea-9996-0242ac110006 container dapi-container: STEP: delete the pod Jan 4 12:17:14.423: INFO: Waiting for pod downward-api-1df2e298-2eec-11ea-9996-0242ac110006 to disappear Jan 4 12:17:14.472: INFO: Pod downward-api-1df2e298-2eec-11ea-9996-0242ac110006 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:17:14.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k4hfj" for this suite. Jan 4 12:17:21.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:17:21.536: INFO: namespace: e2e-tests-downward-api-k4hfj, resource: bindings, ignored listing per whitelist Jan 4 12:17:21.561: INFO: namespace e2e-tests-downward-api-k4hfj deletion completed in 7.077891112s • [SLOW TEST:18.025 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:17:21.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mxmkm Jan 4 12:17:32.932: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mxmkm STEP: checking the pod's current state and verifying that restartCount is present Jan 4 12:17:32.937: INFO: Initial restart count of pod liveness-http is 0 Jan 4 12:17:51.460: INFO: Restart count of pod e2e-tests-container-probe-mxmkm/liveness-http is now 1 (18.522155684s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:17:51.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mxmkm" for this suite. 
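The liveness-http pod is only referenced by name above; the sketch below shows the usual shape of an HTTP /healthz liveness probe, using the upstream example image rather than the exact image this suite uses (an assumption). Once the endpoint starts failing, the kubelet restarts the container, which is the restartCount change the test watches for.
# Hedged sketch: HTTP liveness probe on /healthz; failures trigger a container restart.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo                        # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness                    # assumed: upstream docs image that fails /healthz after ~10s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
# the restart count climbs above 0 once /healthz begins returning errors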
Jan 4 12:17:57.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:17:57.870: INFO: namespace: e2e-tests-container-probe-mxmkm, resource: bindings, ignored listing per whitelist Jan 4 12:17:57.925: INFO: namespace e2e-tests-container-probe-mxmkm deletion completed in 6.308420794s • [SLOW TEST:36.363 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:17:57.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 4 12:21:03.557: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:03.632: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:05.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:05.703: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:07.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:07.672: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:09.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:09.660: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:11.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:11.652: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:13.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:13.658: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:15.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:15.667: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:17.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:17.652: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:19.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:19.651: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:21.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:22.263: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:23.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 
12:21:23.663: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:25.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:25.652: INFO: Pod pod-with-poststart-exec-hook still exists Jan 4 12:21:27.633: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 4 12:21:27.648: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 4 12:21:27.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8xhhn" for this suite. Jan 4 12:21:51.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 12:21:51.723: INFO: namespace: e2e-tests-container-lifecycle-hook-8xhhn, resource: bindings, ignored listing per whitelist Jan 4 12:21:51.918: INFO: namespace e2e-tests-container-lifecycle-hook-8xhhn deletion completed in 24.26397203s • [SLOW TEST:233.992 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 4 12:21:51.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 4 12:21:52.328: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 25.878743ms)
Jan  4 12:21:52.338: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.662895ms)
Jan  4 12:21:52.349: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.253624ms)
Jan  4 12:21:52.358: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.812848ms)
Jan  4 12:21:52.419: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 60.979661ms)
Jan  4 12:21:52.439: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.826268ms)
Jan  4 12:21:52.446: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.836338ms)
Jan  4 12:21:52.454: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.687254ms)
Jan  4 12:21:52.480: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 25.856247ms)
Jan  4 12:21:52.492: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.889466ms)
Jan  4 12:21:52.504: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.9643ms)
Jan  4 12:21:52.519: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.012607ms)
Jan  4 12:21:52.533: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.281835ms)
Jan  4 12:21:52.544: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.647352ms)
Jan  4 12:21:52.553: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.908772ms)
Jan  4 12:21:52.562: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.597549ms)
Jan  4 12:21:52.569: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.210797ms)
Jan  4 12:21:52.574: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.709586ms)
Jan  4 12:21:52.578: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.501236ms)
Jan  4 12:21:52.590: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.0045ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:21:52.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-bdl4w" for this suite.
Jan  4 12:21:58.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:21:58.883: INFO: namespace: e2e-tests-proxy-bdl4w, resource: bindings, ignored listing per whitelist
Jan  4 12:21:58.971: INFO: namespace e2e-tests-proxy-bdl4w deletion completed in 6.376188183s

• [SLOW TEST:7.052 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
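For reference, the listing fetched twenty times above can be reproduced by hand through the apiserver's node proxy subresource; the sketch below assumes kubectl access to the same cluster and reuses the node name, kubeconfig path, and kubelet port from the log.
# Hedged sketch: fetch the kubelet's /logs/ index via the apiserver node proxy,
# using the explicit kubelet port (10250) exactly as the requests above did.
NODE=hunter-server-hu5at5svl7ps                   # node name taken from the log
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/${NODE}:10250/proxy/logs/"
# a 200 response containing alternatives.log and the rest of the directory listing is expected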
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:21:58.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:22:06.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-vhdcn" for this suite.
Jan  4 12:22:12.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:22:12.268: INFO: namespace: e2e-tests-namespaces-vhdcn, resource: bindings, ignored listing per whitelist
Jan  4 12:22:12.375: INFO: namespace e2e-tests-namespaces-vhdcn deletion completed in 6.30250527s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rbhfp" for this suite.
Jan  4 12:22:12.378: INFO: Namespace e2e-tests-nsdeletetest-rbhfp was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-bxwrr" for this suite.
Jan  4 12:22:18.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:22:18.716: INFO: namespace: e2e-tests-nsdeletetest-bxwrr, resource: bindings, ignored listing per whitelist
Jan  4 12:22:18.791: INFO: namespace e2e-tests-nsdeletetest-bxwrr deletion completed in 6.412440997s

• [SLOW TEST:19.820 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
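A rough manual equivalent of the sequence above (create a namespace, add a service, delete the namespace, recreate it, confirm no service survived); the namespace and service names are illustrative, not the ones generated by the suite.
# Hedged sketch: services do not survive deletion of their namespace.
kubectl create namespace nsdelete-demo                         # illustrative name
kubectl -n nsdelete-demo create service clusterip test-svc --tcp=80:80
kubectl delete namespace nsdelete-demo                         # waits until the namespace is removed
kubectl create namespace nsdelete-demo                         # recreate, as the test does
kubectl -n nsdelete-demo get services                          # expect "No resources found."
kubectl delete namespace nsdelete-demo                         # clean up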
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:22:18.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d9ec8634-2eec-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:22:19.203: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-w69sr" to be "success or failure"
Jan  4 12:22:19.233: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 29.131782ms
Jan  4 12:22:21.248: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044236868s
Jan  4 12:22:23.268: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064216584s
Jan  4 12:22:25.367: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163684119s
Jan  4 12:22:28.140: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.936423765s
Jan  4 12:22:30.187: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.983370367s
STEP: Saw pod success
Jan  4 12:22:30.187: INFO: Pod "pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:22:30.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 12:22:30.636: INFO: Waiting for pod pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006 to disappear
Jan  4 12:22:30.747: INFO: Pod pod-projected-configmaps-d9ede37d-2eec-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:22:30.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w69sr" for this suite.
Jan  4 12:22:40.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:22:41.311: INFO: namespace: e2e-tests-projected-w69sr, resource: bindings, ignored listing per whitelist
Jan  4 12:22:41.416: INFO: namespace e2e-tests-projected-w69sr deletion completed in 10.658364173s

• [SLOW TEST:22.624 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
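The pod that consumed the configMap above is not dumped in the log; the manifest below is a hedged sketch of the general shape, with one configMap projected into two separate volumes mounted at different paths. All names are illustrative.
# Hedged sketch: one configMap consumed through two projected volumes in the same pod.
kubectl create configmap projected-demo-cm --from-literal=data-1=value-1   # illustrative name
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo                  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-1
      mountPath: /etc/projected-1
    - name: projected-2
      mountPath: /etc/projected-2
  volumes:
  - name: projected-1
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
  - name: projected-2
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
EOF
kubectl logs projected-configmap-demo             # both mounts should show value-1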
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:22:41.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 12:22:41.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jpzcr'
Jan  4 12:22:41.926: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 12:22:41.926: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan  4 12:22:41.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-jpzcr'
Jan  4 12:22:42.105: INFO: stderr: ""
Jan  4 12:22:42.105: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:22:42.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jpzcr" for this suite.
Jan  4 12:23:08.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:23:08.417: INFO: namespace: e2e-tests-kubectl-jpzcr, resource: bindings, ignored listing per whitelist
Jan  4 12:23:08.571: INFO: namespace e2e-tests-kubectl-jpzcr deletion completed in 26.435401926s

• [SLOW TEST:27.155 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
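The exact command is quoted above; reduced to its essentials plus the verification and cleanup steps the test performs, it looks like the sketch below. The namespace is illustrative and assumed to exist already; the deprecation warning in stderr is expected on this kubectl version.
# Hedged sketch: create a Job from an image with restart=OnFailure, verify it, delete it.
NS=kubectl-demo                                                # illustrative, pre-existing namespace
kubectl --namespace="${NS}" run e2e-test-nginx-job \
  --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine                  # prints the deprecation warning seen above
kubectl --namespace="${NS}" get jobs e2e-test-nginx-job        # verify job.batch/e2e-test-nginx-job exists
kubectl --namespace="${NS}" delete jobs e2e-test-nginx-job     # same cleanup step as the test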
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:23:08.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  4 12:23:08.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:09.302: INFO: stderr: ""
Jan  4 12:23:09.302: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 12:23:09.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:09.474: INFO: stderr: ""
Jan  4 12:23:09.474: INFO: stdout: "update-demo-nautilus-6gqhj update-demo-nautilus-cc5s7 "
Jan  4 12:23:09.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gqhj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:09.725: INFO: stderr: ""
Jan  4 12:23:09.725: INFO: stdout: ""
Jan  4 12:23:09.725: INFO: update-demo-nautilus-6gqhj is created but not running
Jan  4 12:23:14.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:14.855: INFO: stderr: ""
Jan  4 12:23:14.855: INFO: stdout: "update-demo-nautilus-6gqhj update-demo-nautilus-cc5s7 "
Jan  4 12:23:14.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gqhj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:14.967: INFO: stderr: ""
Jan  4 12:23:14.968: INFO: stdout: ""
Jan  4 12:23:14.968: INFO: update-demo-nautilus-6gqhj is created but not running
Jan  4 12:23:19.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:20.137: INFO: stderr: ""
Jan  4 12:23:20.137: INFO: stdout: "update-demo-nautilus-6gqhj update-demo-nautilus-cc5s7 "
Jan  4 12:23:20.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gqhj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:20.270: INFO: stderr: ""
Jan  4 12:23:20.270: INFO: stdout: ""
Jan  4 12:23:20.270: INFO: update-demo-nautilus-6gqhj is created but not running
Jan  4 12:23:25.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:25.497: INFO: stderr: ""
Jan  4 12:23:25.497: INFO: stdout: "update-demo-nautilus-6gqhj update-demo-nautilus-cc5s7 "
Jan  4 12:23:25.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gqhj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:25.615: INFO: stderr: ""
Jan  4 12:23:25.615: INFO: stdout: "true"
Jan  4 12:23:25.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6gqhj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:25.740: INFO: stderr: ""
Jan  4 12:23:25.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 12:23:25.740: INFO: validating pod update-demo-nautilus-6gqhj
Jan  4 12:23:25.773: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 12:23:25.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 12:23:25.774: INFO: update-demo-nautilus-6gqhj is verified up and running
Jan  4 12:23:25.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc5s7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:25.893: INFO: stderr: ""
Jan  4 12:23:25.893: INFO: stdout: "true"
Jan  4 12:23:25.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc5s7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:26.011: INFO: stderr: ""
Jan  4 12:23:26.011: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 12:23:26.011: INFO: validating pod update-demo-nautilus-cc5s7
Jan  4 12:23:26.026: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 12:23:26.026: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 12:23:26.026: INFO: update-demo-nautilus-cc5s7 is verified up and running
STEP: using delete to clean up resources
Jan  4 12:23:26.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:26.127: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:23:26.128: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  4 12:23:26.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-4k4f9'
Jan  4 12:23:26.315: INFO: stderr: "No resources found.\n"
Jan  4 12:23:26.315: INFO: stdout: ""
Jan  4 12:23:26.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-4k4f9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 12:23:26.624: INFO: stderr: ""
Jan  4 12:23:26.624: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:23:26.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4k4f9" for this suite.
Jan  4 12:23:50.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:23:51.072: INFO: namespace: e2e-tests-kubectl-4k4f9, resource: bindings, ignored listing per whitelist
Jan  4 12:23:51.172: INFO: namespace e2e-tests-kubectl-4k4f9 deletion completed in 24.47919971s

• [SLOW TEST:42.601 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
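The manifest piped into kubectl create -f - above is not shown; the ReplicationController below is a hedged reconstruction of its likely shape, inferred from the pod names, the name=update-demo selector, and the nautilus image that do appear in the log. Treat the replica count, labels, and container details as assumptions.
# Hedged sketch of the update-demo ReplicationController that the test creates,
# validates, and then force-deletes.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                                     # the log shows two update-demo-nautilus-* pods
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo                         # container name the go-template checks above look for
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF
kubectl get pods -l name=update-demo              # wait until both pods report Running
kubectl delete rc update-demo-nautilus --grace-period=0 --force
kubectl get rc,svc -l name=update-demo --no-headers   # expect "No resources found."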
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:23:51.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  4 12:23:51.411: INFO: Waiting up to 5m0s for pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-tb995" to be "success or failure"
Jan  4 12:23:51.431: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 20.000486ms
Jan  4 12:23:53.468: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057704587s
Jan  4 12:23:55.495: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084278452s
Jan  4 12:23:57.892: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481041295s
Jan  4 12:23:59.914: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5031391s
Jan  4 12:24:01.968: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556870393s
STEP: Saw pod success
Jan  4 12:24:01.968: INFO: Pod "downward-api-10e6c99b-2eed-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:24:01.985: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-10e6c99b-2eed-11ea-9996-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan  4 12:24:03.416: INFO: Waiting for pod downward-api-10e6c99b-2eed-11ea-9996-0242ac110006 to disappear
Jan  4 12:24:03.436: INFO: Pod downward-api-10e6c99b-2eed-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:24:03.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tb995" for this suite.
Jan  4 12:24:09.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:24:09.622: INFO: namespace: e2e-tests-downward-api-tb995, resource: bindings, ignored listing per whitelist
Jan  4 12:24:09.757: INFO: namespace e2e-tests-downward-api-tb995 deletion completed in 6.309927055s

• [SLOW TEST:18.584 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
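For reference, the pod created in this test reads default limits through downward API environment variables. A minimal sketch of such a pod is below; the container name dapi-container matches the log above, while the pod name, image, and command are illustrative assumptions. No resources.limits are set, so the reported values fall back to node allocatable:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # illustrative name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name as seen in the log
    image: busybox                 # illustrative image
    command: ["sh", "-c", "env"]   # print the injected variables
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory
    # no resources.limits set: the variables default to node allocatable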
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:24:09.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-h5cl
STEP: Creating a pod to test atomic-volume-subpath
Jan  4 12:24:10.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-h5cl" in namespace "e2e-tests-subpath-mkg78" to be "success or failure"
Jan  4 12:24:10.120: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 85.897396ms
Jan  4 12:24:12.135: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100237175s
Jan  4 12:24:14.154: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119978623s
Jan  4 12:24:16.169: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134391211s
Jan  4 12:24:18.202: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167646878s
Jan  4 12:24:20.223: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188525787s
Jan  4 12:24:22.241: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.206732966s
Jan  4 12:24:24.286: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Pending", Reason="", readiness=false. Elapsed: 14.251589577s
Jan  4 12:24:26.335: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 16.300276587s
Jan  4 12:24:28.673: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 18.638505987s
Jan  4 12:24:30.700: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 20.66604555s
Jan  4 12:24:32.710: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 22.675889067s
Jan  4 12:24:34.739: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 24.704188812s
Jan  4 12:24:36.753: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 26.718848382s
Jan  4 12:24:38.768: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 28.733513042s
Jan  4 12:24:40.800: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 30.765476791s
Jan  4 12:24:42.926: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 32.891404402s
Jan  4 12:24:44.948: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Running", Reason="", readiness=false. Elapsed: 34.913796902s
Jan  4 12:24:46.961: INFO: Pod "pod-subpath-test-projected-h5cl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.926615919s
STEP: Saw pod success
Jan  4 12:24:46.961: INFO: Pod "pod-subpath-test-projected-h5cl" satisfied condition "success or failure"
Jan  4 12:24:46.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-h5cl container test-container-subpath-projected-h5cl: 
STEP: delete the pod
Jan  4 12:24:47.085: INFO: Waiting for pod pod-subpath-test-projected-h5cl to disappear
Jan  4 12:24:47.102: INFO: Pod pod-subpath-test-projected-h5cl no longer exists
STEP: Deleting pod pod-subpath-test-projected-h5cl
Jan  4 12:24:47.102: INFO: Deleting pod "pod-subpath-test-projected-h5cl" in namespace "e2e-tests-subpath-mkg78"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:24:47.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mkg78" for this suite.
Jan  4 12:24:55.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:24:55.388: INFO: namespace: e2e-tests-subpath-mkg78, resource: bindings, ignored listing per whitelist
Jan  4 12:24:55.422: INFO: namespace e2e-tests-subpath-mkg78 deletion completed in 8.309818165s

• [SLOW TEST:45.664 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
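For reference, the projected-volume subPath case above can be sketched as follows; the ConfigMap name, key, mount path, image, and command are illustrative assumptions, not values taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-demo # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-config          # assumed ConfigMap with key "my-key"
  containers:
  - name: test-container-subpath
    image: busybox                 # illustrative image
    command: ["sh", "-c", "cat /mnt/projected/my-key"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/projected/my-key
      subPath: my-key              # mount a single file from the projected volume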
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:24:55.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  4 12:24:55.732: INFO: Waiting up to 5m0s for pod "pod-373d3987-2eed-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-bkknf" to be "success or failure"
Jan  4 12:24:55.861: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 129.048996ms
Jan  4 12:24:58.358: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.625927145s
Jan  4 12:25:00.375: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.643304066s
Jan  4 12:25:02.391: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658537507s
Jan  4 12:25:04.435: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.70317738s
Jan  4 12:25:06.823: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.090695853s
Jan  4 12:25:08.836: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.104325279s
STEP: Saw pod success
Jan  4 12:25:08.837: INFO: Pod "pod-373d3987-2eed-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:25:08.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-373d3987-2eed-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 12:25:09.927: INFO: Waiting for pod pod-373d3987-2eed-11ea-9996-0242ac110006 to disappear
Jan  4 12:25:10.334: INFO: Pod pod-373d3987-2eed-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:25:10.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bkknf" for this suite.
Jan  4 12:25:16.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:25:16.927: INFO: namespace: e2e-tests-emptydir-bkknf, resource: bindings, ignored listing per whitelist
Jan  4 12:25:17.137: INFO: namespace e2e-tests-emptydir-bkknf deletion completed in 6.776462176s

• [SLOW TEST:21.714 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
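For reference, the (non-root,0644,tmpfs) case combines a memory-backed emptyDir with a non-root security context; the sketch below is illustrative (UID, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # run as non-root
  volumes:
  - name: cache
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox                 # illustrative image
    command: ["sh", "-c", "touch /cache/f && chmod 0644 /cache/f && ls -l /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache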
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:25:17.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  4 12:25:17.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jlbh5'
Jan  4 12:25:19.728: INFO: stderr: ""
Jan  4 12:25:19.728: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  4 12:25:21.790: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:21.790: INFO: Found 0 / 1
Jan  4 12:25:22.866: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:22.867: INFO: Found 0 / 1
Jan  4 12:25:23.752: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:23.752: INFO: Found 0 / 1
Jan  4 12:25:24.788: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:24.788: INFO: Found 0 / 1
Jan  4 12:25:26.106: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:26.106: INFO: Found 0 / 1
Jan  4 12:25:26.745: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:26.745: INFO: Found 0 / 1
Jan  4 12:25:27.843: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:27.844: INFO: Found 0 / 1
Jan  4 12:25:28.749: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:28.749: INFO: Found 0 / 1
Jan  4 12:25:29.750: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:29.750: INFO: Found 0 / 1
Jan  4 12:25:30.746: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:30.746: INFO: Found 1 / 1
Jan  4 12:25:30.746: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  4 12:25:30.754: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 12:25:30.754: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching string
Jan  4 12:25:30.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p9pm4 redis-master --namespace=e2e-tests-kubectl-jlbh5'
Jan  4 12:25:30.968: INFO: stderr: ""
Jan  4 12:25:30.968: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Jan 12:25:28.646 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 12:25:28.646 # Server started, Redis version 3.2.12\n1:M 04 Jan 12:25:28.646 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 12:25:28.646 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  4 12:25:30.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9pm4 redis-master --namespace=e2e-tests-kubectl-jlbh5 --tail=1'
Jan  4 12:25:31.146: INFO: stderr: ""
Jan  4 12:25:31.146: INFO: stdout: "1:M 04 Jan 12:25:28.646 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  4 12:25:31.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9pm4 redis-master --namespace=e2e-tests-kubectl-jlbh5 --limit-bytes=1'
Jan  4 12:25:31.315: INFO: stderr: ""
Jan  4 12:25:31.315: INFO: stdout: " "
STEP: exposing timestamps
Jan  4 12:25:31.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9pm4 redis-master --namespace=e2e-tests-kubectl-jlbh5 --tail=1 --timestamps'
Jan  4 12:25:31.460: INFO: stderr: ""
Jan  4 12:25:31.460: INFO: stdout: "2020-01-04T12:25:28.648155022Z 1:M 04 Jan 12:25:28.646 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  4 12:25:33.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9pm4 redis-master --namespace=e2e-tests-kubectl-jlbh5 --since=1s'
Jan  4 12:25:34.205: INFO: stderr: ""
Jan  4 12:25:34.206: INFO: stdout: ""
Jan  4 12:25:34.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-p9pm4 redis-master --namespace=e2e-tests-kubectl-jlbh5 --since=24h'
Jan  4 12:25:34.334: INFO: stderr: ""
Jan  4 12:25:34.334: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Jan 12:25:28.646 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 12:25:28.646 # Server started, Redis version 3.2.12\n1:M 04 Jan 12:25:28.646 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 12:25:28.646 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  4 12:25:34.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jlbh5'
Jan  4 12:25:34.425: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:25:34.425: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  4 12:25:34.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-jlbh5'
Jan  4 12:25:34.555: INFO: stderr: "No resources found.\n"
Jan  4 12:25:34.556: INFO: stdout: ""
Jan  4 12:25:34.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-jlbh5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 12:25:34.666: INFO: stderr: ""
Jan  4 12:25:34.666: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:25:34.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jlbh5" for this suite.
Jan  4 12:25:58.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:25:58.811: INFO: namespace: e2e-tests-kubectl-jlbh5, resource: bindings, ignored listing per whitelist
Jan  4 12:25:58.856: INFO: namespace e2e-tests-kubectl-jlbh5 deletion completed in 24.171893855s

• [SLOW TEST:41.719 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:25:58.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  4 12:25:59.126: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  4 12:25:59.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:25:59.669: INFO: stderr: ""
Jan  4 12:25:59.669: INFO: stdout: "service/redis-slave created\n"
Jan  4 12:25:59.670: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  4 12:25:59.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:00.298: INFO: stderr: ""
Jan  4 12:26:00.298: INFO: stdout: "service/redis-master created\n"
Jan  4 12:26:00.299: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  4 12:26:00.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:00.961: INFO: stderr: ""
Jan  4 12:26:00.961: INFO: stdout: "service/frontend created\n"
Jan  4 12:26:00.962: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  4 12:26:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:01.431: INFO: stderr: ""
Jan  4 12:26:01.431: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  4 12:26:01.432: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  4 12:26:01.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:01.968: INFO: stderr: ""
Jan  4 12:26:01.968: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  4 12:26:01.969: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  4 12:26:01.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:02.424: INFO: stderr: ""
Jan  4 12:26:02.424: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  4 12:26:02.424: INFO: Waiting for all frontend pods to be Running.
Jan  4 12:26:37.478: INFO: Waiting for frontend to serve content.
Jan  4 12:26:37.560: INFO: Trying to add a new entry to the guestbook.
Jan  4 12:26:37.640: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  4 12:26:37.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:38.061: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:26:38.062: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:26:38.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:38.432: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:26:38.432: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:26:38.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:38.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:26:38.892: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:26:38.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:39.079: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:26:39.080: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:26:39.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:39.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:26:39.352: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:26:39.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nfmsz'
Jan  4 12:26:39.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:26:39.642: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:26:39.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nfmsz" for this suite.
Jan  4 12:27:27.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:27:28.007: INFO: namespace: e2e-tests-kubectl-nfmsz, resource: bindings, ignored listing per whitelist
Jan  4 12:27:28.039: INFO: namespace e2e-tests-kubectl-nfmsz deletion completed in 48.374295747s

• [SLOW TEST:89.182 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:27:28.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  4 12:27:38.411: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:28:05.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-fthjt" for this suite.
Jan  4 12:28:11.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:28:12.046: INFO: namespace: e2e-tests-namespaces-fthjt, resource: bindings, ignored listing per whitelist
Jan  4 12:28:12.173: INFO: namespace e2e-tests-namespaces-fthjt deletion completed in 6.394082758s
STEP: Destroying namespace "e2e-tests-nsdeletetest-79znc" for this suite.
Jan  4 12:28:12.177: INFO: Namespace e2e-tests-nsdeletetest-79znc was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-k8nrv" for this suite.
Jan  4 12:28:18.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:28:18.590: INFO: namespace: e2e-tests-nsdeletetest-k8nrv, resource: bindings, ignored listing per whitelist
Jan  4 12:28:18.606: INFO: namespace e2e-tests-nsdeletetest-k8nrv deletion completed in 6.429319114s

• [SLOW TEST:50.567 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:28:18.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  4 12:28:18.939: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 12:28:19.027: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 12:28:19.043: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  4 12:28:19.079: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:28:19.079: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:28:19.079: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:28:19.079: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  4 12:28:19.079: INFO: 	Container coredns ready: true, restart count 0
Jan  4 12:28:19.079: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  4 12:28:19.079: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 12:28:19.079: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:28:19.079: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  4 12:28:19.079: INFO: 	Container weave ready: true, restart count 0
Jan  4 12:28:19.079: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 12:28:19.079: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  4 12:28:19.079: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b689f2a4-2eed-11ea-9996-0242ac110006 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b689f2a4-2eed-11ea-9996-0242ac110006 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b689f2a4-2eed-11ea-9996-0242ac110006
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:28:42.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qrx7l" for this suite.
Jan  4 12:29:06.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:29:06.359: INFO: namespace: e2e-tests-sched-pred-qrx7l, resource: bindings, ignored listing per whitelist
Jan  4 12:29:06.409: INFO: namespace e2e-tests-sched-pred-qrx7l deletion completed in 24.3446958s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:47.802 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
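For reference, the relaunched pod in this test pins itself to the labelled node through a nodeSelector; the label key and value below are taken from the log, while the pod name and image are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector         # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # illustrative image
  nodeSelector:
    kubernetes.io/e2e-b689f2a4-2eed-11ea-9996-0242ac110006: "42"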
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:29:06.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-ccd4a46b-2eed-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:29:06.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-8x8t5" to be "success or failure"
Jan  4 12:29:06.760: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 34.27931ms
Jan  4 12:29:09.168: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443088116s
Jan  4 12:29:11.196: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470310786s
Jan  4 12:29:13.475: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750108667s
Jan  4 12:29:15.509: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.78355589s
Jan  4 12:29:17.516: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.790536247s
Jan  4 12:29:19.528: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.802971236s
STEP: Saw pod success
Jan  4 12:29:19.528: INFO: Pod "pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:29:19.532: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jan  4 12:29:21.038: INFO: Waiting for pod pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006 to disappear
Jan  4 12:29:21.165: INFO: Pod pod-configmaps-ccd6f5ad-2eed-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:29:21.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8x8t5" for this suite.
Jan  4 12:29:27.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:29:27.299: INFO: namespace: e2e-tests-configmap-8x8t5, resource: bindings, ignored listing per whitelist
Jan  4 12:29:27.369: INFO: namespace e2e-tests-configmap-8x8t5 deletion completed in 6.192534426s

• [SLOW TEST:20.959 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
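For reference, the "with mappings" variant remaps ConfigMap keys to chosen file paths via items; the ConfigMap name and container name below come from the log, while the key, path, image, and command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo     # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-ccd4a46b-2eed-11ea-9996-0242ac110006
      items:
      - key: data-1                # assumed key name
        path: path/to/data-2       # the key is remapped to this relative path
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume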
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:29:27.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 12:29:27.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-69xdp" to be "success or failure"
Jan  4 12:29:27.715: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 113.113358ms
Jan  4 12:29:29.726: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124073638s
Jan  4 12:29:31.753: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151458479s
Jan  4 12:29:33.807: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205514638s
Jan  4 12:29:35.863: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260665122s
Jan  4 12:29:37.880: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.278498878s
STEP: Saw pod success
Jan  4 12:29:37.881: INFO: Pod "downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:29:37.889: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 12:29:38.106: INFO: Waiting for pod downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006 to disappear
Jan  4 12:29:38.165: INFO: Pod downwardapi-volume-d949d9c4-2eed-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:29:38.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-69xdp" for this suite.
Jan  4 12:29:44.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:29:44.374: INFO: namespace: e2e-tests-downward-api-69xdp, resource: bindings, ignored listing per whitelist
Jan  4 12:29:44.634: INFO: namespace e2e-tests-downward-api-69xdp deletion completed in 6.456680975s

• [SLOW TEST:17.265 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
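For reference, this test reads the default cpu limit through a downwardAPI volume rather than environment variables; the container name client-container matches the log, and the rest of the sketch is an illustrative assumption:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # report the value in millicores
  containers:
  - name: client-container         # container name as seen in the log
    image: busybox                 # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
    # no cpu limit set: the file reports node allocatable cpu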
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:29:44.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-82vxf
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  4 12:29:45.015: INFO: Found 0 stateful pods, waiting for 3
Jan  4 12:29:55.027: INFO: Found 2 stateful pods, waiting for 3
Jan  4 12:30:05.631: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:30:05.631: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:30:05.631: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 12:30:15.034: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:30:15.034: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:30:15.034: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  4 12:30:15.098: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  4 12:30:25.309: INFO: Updating stateful set ss2
Jan  4 12:30:25.341: INFO: Waiting for Pod e2e-tests-statefulset-82vxf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 12:30:35.384: INFO: Waiting for Pod e2e-tests-statefulset-82vxf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  4 12:30:47.707: INFO: Found 2 stateful pods, waiting for 3
Jan  4 12:30:57.728: INFO: Found 2 stateful pods, waiting for 3
Jan  4 12:31:07.755: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:31:07.755: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:31:07.755: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 12:31:17.725: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:31:17.726: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:31:17.726: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  4 12:31:17.818: INFO: Updating stateful set ss2
Jan  4 12:31:17.948: INFO: Waiting for Pod e2e-tests-statefulset-82vxf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 12:31:27.974: INFO: Waiting for Pod e2e-tests-statefulset-82vxf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 12:31:38.584: INFO: Updating stateful set ss2
Jan  4 12:31:38.945: INFO: Waiting for StatefulSet e2e-tests-statefulset-82vxf/ss2 to complete update
Jan  4 12:31:38.946: INFO: Waiting for Pod e2e-tests-statefulset-82vxf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 12:31:49.323: INFO: Waiting for StatefulSet e2e-tests-statefulset-82vxf/ss2 to complete update
Jan  4 12:31:59.016: INFO: Waiting for StatefulSet e2e-tests-statefulset-82vxf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  4 12:32:08.985: INFO: Deleting all statefulset in ns e2e-tests-statefulset-82vxf
Jan  4 12:32:08.991: INFO: Scaling statefulset ss2 to 0
Jan  4 12:32:59.038: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:32:59.053: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:32:59.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-82vxf" for this suite.
Jan  4 12:33:07.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:33:07.481: INFO: namespace: e2e-tests-statefulset-82vxf, resource: bindings, ignored listing per whitelist
Jan  4 12:33:07.610: INFO: namespace e2e-tests-statefulset-82vxf deletion completed in 8.47431364s

• [SLOW TEST:202.975 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
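For reference, the canary and phased rolling updates above are driven by the StatefulSet RollingUpdate partition: only pods whose ordinal is greater than or equal to the partition receive the new revision. A minimal sketch follows; the set name ss2, service name test, replica count, and image match the log, while the labels are illustrative assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                # headless service created by the test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo                # illustrative label
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                 # canary: only ordinal >= 2 (ss2-2) is updated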
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:33:07.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-5ca25136-2eee-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 12:33:08.009: INFO: Waiting up to 5m0s for pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-drl6n" to be "success or failure"
Jan  4 12:33:08.027: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.305457ms
Jan  4 12:33:10.107: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097512926s
Jan  4 12:33:12.137: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127487494s
Jan  4 12:33:14.580: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570351224s
Jan  4 12:33:16.593: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.583780281s
Jan  4 12:33:18.637: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.62795198s
Jan  4 12:33:20.710: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.700824202s
STEP: Saw pod success
Jan  4 12:33:20.710: INFO: Pod "pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:33:20.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan  4 12:33:21.070: INFO: Waiting for pod pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006 to disappear
Jan  4 12:33:21.092: INFO: Pod pod-secrets-5ca344be-2eee-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:33:21.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-drl6n" for this suite.
Jan  4 12:33:27.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:33:27.310: INFO: namespace: e2e-tests-secrets-drl6n, resource: bindings, ignored listing per whitelist
Jan  4 12:33:27.414: INFO: namespace e2e-tests-secrets-drl6n deletion completed in 6.247245341s

• [SLOW TEST:19.803 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
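For reference, the "mappings and Item Mode" case remaps Secret keys to chosen paths and sets a per-item file mode; the secret name and container name below come from the log, while the key, path, mode, image, and command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo        # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-5ca25136-2eee-11ea-9996-0242ac110006
      items:
      - key: data-1                # assumed key name
        path: new-path-data-1      # the key is remapped to this file name
        mode: 0400                 # per-item file mode (octal)
  containers:
  - name: secret-volume-test
    image: busybox                 # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true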
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:33:27.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-68a7de3e-2eee-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 12:33:28.447: INFO: Waiting up to 5m0s for pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-wnf6v" to be "success or failure"
Jan  4 12:33:28.471: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 23.640987ms
Jan  4 12:33:30.502: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054844776s
Jan  4 12:33:32.531: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084377976s
Jan  4 12:33:34.560: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112888424s
Jan  4 12:33:36.585: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138037616s
Jan  4 12:33:38.755: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.308015788s
Jan  4 12:33:40.799: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.352232373s
STEP: Saw pod success
Jan  4 12:33:40.800: INFO: Pod "pod-secrets-68d69251-2eee-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:33:40.808: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-68d69251-2eee-11ea-9996-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan  4 12:33:42.113: INFO: Waiting for pod pod-secrets-68d69251-2eee-11ea-9996-0242ac110006 to disappear
Jan  4 12:33:42.129: INFO: Pod pod-secrets-68d69251-2eee-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:33:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wnf6v" for this suite.
Jan  4 12:33:48.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:33:48.383: INFO: namespace: e2e-tests-secrets-wnf6v, resource: bindings, ignored listing per whitelist
Jan  4 12:33:48.514: INFO: namespace e2e-tests-secrets-wnf6v deletion completed in 6.356936168s
STEP: Destroying namespace "e2e-tests-secret-namespace-pf5tk" for this suite.
Jan  4 12:33:54.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:33:54.819: INFO: namespace: e2e-tests-secret-namespace-pf5tk, resource: bindings, ignored listing per whitelist
Jan  4 12:33:54.819: INFO: namespace e2e-tests-secret-namespace-pf5tk deletion completed in 6.304609455s

• [SLOW TEST:27.405 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:33:54.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  4 12:34:11.566: INFO: Successfully updated pod "annotationupdate78a1c528-2eee-11ea-9996-0242ac110006"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:34:13.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q9htk" for this suite.
Jan  4 12:34:37.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:34:38.043: INFO: namespace: e2e-tests-projected-q9htk, resource: bindings, ignored listing per whitelist
Jan  4 12:34:38.261: INFO: namespace e2e-tests-projected-q9htk deletion completed in 24.410102658s

• [SLOW TEST:43.442 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
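Editor's note: the projected downwardAPI test above only needs a "Successfully updated pod" line because the pod's volume exposes its own annotations; when the test patches the annotations, the kubelet rewrites the projected file. A hedged sketch of such a pod spec follows (illustrative names and busybox image, context-less client-go Create as before).

package e2eexamples

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateAnnotationWatcherPod creates a pod whose projected downwardAPI volume
// exposes the pod's own annotations as a file; the kubelet refreshes that file
// when the annotations are later updated.
func CreateAnnotationWatcherPod(cs kubernetes.Interface, ns string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{Projected: &v1.ProjectedVolumeSource{
					Sources: []v1.VolumeProjection{{
						DownwardAPI: &v1.DownwardAPIProjection{
							Items: []v1.DownwardAPIVolumeFile{{
								Path:     "annotations",
								FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
							}},
						},
					}},
				}},
			}},
			Containers: []v1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(pod)
	return err
}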
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:34:38.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-929512e2-2eee-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 12:34:38.561: INFO: Waiting up to 5m0s for pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-wqv86" to be "success or failure"
Jan  4 12:34:38.615: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 53.341087ms
Jan  4 12:34:40.626: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064293682s
Jan  4 12:34:42.667: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105341578s
Jan  4 12:34:45.252: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690644345s
Jan  4 12:34:47.429: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.867805108s
Jan  4 12:34:49.543: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.981706323s
STEP: Saw pod success
Jan  4 12:34:49.544: INFO: Pod "pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:34:49.646: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006 container secret-env-test: 
STEP: delete the pod
Jan  4 12:34:49.712: INFO: Waiting for pod pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006 to disappear
Jan  4 12:34:49.834: INFO: Pod pod-secrets-9296d91a-2eee-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:34:49.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wqv86" for this suite.
Jan  4 12:34:55.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:34:55.992: INFO: namespace: e2e-tests-secrets-wqv86, resource: bindings, ignored listing per whitelist
Jan  4 12:34:56.053: INFO: namespace e2e-tests-secrets-wqv86 deletion completed in 6.209025338s

• [SLOW TEST:17.791 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
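Editor's note: the env-var Secrets test above injects a Secret key into a container through valueFrom.secretKeyRef rather than a volume. A minimal sketch of that pod shape (illustrative names, busybox image, era-appropriate context-less client-go call):

package e2eexamples

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateSecretEnvPod exposes the "data-1" key of an existing Secret as the
// SECRET_DATA environment variable, as the env-var Secrets test above does.
func CreateSecretEnvPod(cs kubernetes.Interface, ns, secretName string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []v1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &v1.EnvVarSource{
						SecretKeyRef: &v1.SecretKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: secretName},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(pod)
	return err
}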
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:34:56.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  4 12:34:56.516: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2d5hv,SelfLink:/api/v1/namespaces/e2e-tests-watch-2d5hv/configmaps/e2e-watch-test-resource-version,UID:9d3a9e7f-2eee-11ea-a994-fa163e34d433,ResourceVersion:17139083,Generation:0,CreationTimestamp:2020-01-04 12:34:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 12:34:56.517: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2d5hv,SelfLink:/api/v1/namespaces/e2e-tests-watch-2d5hv/configmaps/e2e-watch-test-resource-version,UID:9d3a9e7f-2eee-11ea-a994-fa163e34d433,ResourceVersion:17139084,Generation:0,CreationTimestamp:2020-01-04 12:34:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:34:56.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2d5hv" for this suite.
Jan  4 12:35:04.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:35:04.759: INFO: namespace: e2e-tests-watch-2d5hv, resource: bindings, ignored listing per whitelist
Jan  4 12:35:04.783: INFO: namespace e2e-tests-watch-2d5hv deletion completed in 8.165836575s

• [SLOW TEST:8.729 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
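Editor's note: the Watchers test above mutates a ConfigMap twice, deletes it, then opens a watch at the resourceVersion returned by the first update; the API server replays everything after that version, which is exactly the MODIFIED (mutation: 2) and DELETED events printed in the log. A sketch of that watch with client-go (context-less Watch signature of this era; the label selector matches the label visible in the dumped objects above):

package e2eexamples

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WatchConfigMapsFrom opens a watch on ConfigMaps in ns starting at a specific
// resourceVersion; events that occurred after that version are replayed first.
func WatchConfigMapsFrom(cs kubernetes.Interface, ns, resourceVersion string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		ResourceVersion: resourceVersion,
		LabelSelector:   "watch-this-configmap=from-resource-version",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		// The conformance test asserts it sees MODIFIED (mutation: 2) and then DELETED.
		fmt.Printf("Got %s event: %#v\n", event.Type, event.Object)
	}
	return nil
}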
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:35:04.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  4 12:35:15.138: INFO: Pod pod-hostip-a26d202f-2eee-11ea-9996-0242ac110006 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:35:15.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4tmwh" for this suite.
Jan  4 12:35:39.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:35:39.310: INFO: namespace: e2e-tests-pods-4tmwh, resource: bindings, ignored listing per whitelist
Jan  4 12:35:39.346: INFO: namespace e2e-tests-pods-4tmwh deletion completed in 24.193566564s

• [SLOW TEST:34.563 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
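Editor's note: the "should get a host IP" test above simply creates a pod and waits until status.hostIP is populated by the kubelet (10.96.1.240 in this run). A small polling helper in the same spirit (illustrative retry budget, context-less Get signature of this era):

package e2eexamples

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForHostIP polls a pod until the kubelet has reported status.hostIP,
// which is the field the "should get a host IP" test above checks.
func WaitForHostIP(cs kubernetes.Interface, ns, podName string) (string, error) {
	for i := 0; i < 60; i++ {
		pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
		if err != nil {
			return "", err
		}
		if pod.Status.HostIP != "" {
			return pod.Status.HostIP, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("pod %s/%s never reported a host IP", ns, podName)
}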
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:35:39.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 12:35:39.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-6tms6" to be "success or failure"
Jan  4 12:35:39.658: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.776498ms
Jan  4 12:35:41.671: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025762703s
Jan  4 12:35:43.689: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044253596s
Jan  4 12:35:46.062: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41643688s
Jan  4 12:35:48.087: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.442210138s
Jan  4 12:35:50.103: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.458256997s
STEP: Saw pod success
Jan  4 12:35:50.104: INFO: Pod "downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:35:50.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 12:35:51.879: INFO: Waiting for pod downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006 to disappear
Jan  4 12:35:51.921: INFO: Pod downwardapi-volume-b70aed01-2eee-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:35:51.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6tms6" for this suite.
Jan  4 12:36:00.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:36:00.284: INFO: namespace: e2e-tests-projected-6tms6, resource: bindings, ignored listing per whitelist
Jan  4 12:36:00.396: INFO: namespace e2e-tests-projected-6tms6 deletion completed in 8.349698513s

• [SLOW TEST:21.050 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
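Editor's note: the "set mode on item file" case above differs from the earlier projected downwardAPI example only in that the projected item carries an explicit mode. A compact sketch of just that volume definition (path and mode are illustrative):

package e2eexamples

import (
	v1 "k8s.io/api/core/v1"
)

// DownwardAPIItemWithMode builds a projected downwardAPI volume in the shape
// exercised above: metadata.name lands in a file created with mode 0400.
func DownwardAPIItemWithMode() v1.Volume {
	mode := int32(0400)
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				}},
			},
		},
	}
}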
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:36:00.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 12:36:00.871: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 31.684576ms)
Jan  4 12:36:00.892: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 21.391791ms)
Jan  4 12:36:00.904: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 11.151546ms)
Jan  4 12:36:00.909: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.87683ms)
Jan  4 12:36:00.914: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.020825ms)
Jan  4 12:36:00.920: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.475248ms)
Jan  4 12:36:01.023: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 102.511382ms)
Jan  4 12:36:01.033: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 10.385789ms)
Jan  4 12:36:01.039: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.796698ms)
Jan  4 12:36:01.045: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.192108ms)
Jan  4 12:36:01.052: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.439262ms)
Jan  4 12:36:01.058: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.847701ms)
Jan  4 12:36:01.062: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.669482ms)
Jan  4 12:36:01.068: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.246184ms)
Jan  4 12:36:01.072: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.256299ms)
Jan  4 12:36:01.076: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 3.849647ms)
Jan  4 12:36:01.081: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.62171ms)
Jan  4 12:36:01.087: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.219642ms)
Jan  4 12:36:01.091: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.819844ms)
Jan  4 12:36:01.095: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 3.732763ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:36:01.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kpbhz" for this suite.
Jan  4 12:36:07.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:36:07.303: INFO: namespace: e2e-tests-proxy-kpbhz, resource: bindings, ignored listing per whitelist
Jan  4 12:36:07.350: INFO: namespace e2e-tests-proxy-kpbhz deletion completed in 6.25064147s

• [SLOW TEST:6.953 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
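Editor's note: each of the 20 probes above is a raw GET against the node's "proxy" subresource, reading the kubelet's /logs/ listing through the API server and recording status and latency. With client-go that is a request through the core REST client; the sketch below is illustrative and uses the context-less DoRaw() form of client-go releases from this era.

package e2eexamples

import (
	"k8s.io/client-go/kubernetes"
)

// NodeLogsViaProxy reads the kubelet's /logs/ directory listing for a node
// through the API server's node proxy subresource, the endpoint probed above.
func NodeLogsViaProxy(cs kubernetes.Interface, nodeName string) (string, error) {
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/" + nodeName + "/proxy/logs/").
		DoRaw()
	if err != nil {
		return "", err
	}
	return string(body), nil
}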
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:36:07.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  4 12:36:07.591: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix859844360/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:36:07.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-825qr" for this suite.
Jan  4 12:36:15.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:36:15.902: INFO: namespace: e2e-tests-kubectl-825qr, resource: bindings, ignored listing per whitelist
Jan  4 12:36:15.971: INFO: namespace e2e-tests-kubectl-825qr deletion completed in 8.237888546s

• [SLOW TEST:8.620 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
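Editor's note: the kubectl test above starts `kubectl proxy --unix-socket=<path>` and then retrieves /api/ over that socket. Talking HTTP over a Unix socket from Go needs only the standard library; the sketch below assumes a proxy was started separately with the hypothetical socket path /tmp/kubectl-proxy.sock.

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

// Fetch /api/ through a kubectl proxy started with
// --unix-socket=/tmp/kubectl-proxy.sock (run the proxy separately).
func main() {
	const sock = "/tmp/kubectl-proxy.sock"
	client := &http.Client{Transport: &http.Transport{
		// Ignore the host in the URL and dial the Unix socket instead.
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return net.Dial("unix", sock)
		},
	}}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}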
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:36:15.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  4 12:36:16.296: INFO: Waiting up to 5m0s for pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-67nlv" to be "success or failure"
Jan  4 12:36:16.320: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 22.97847ms
Jan  4 12:36:18.490: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193332392s
Jan  4 12:36:20.530: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233086431s
Jan  4 12:36:22.968: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.670943362s
Jan  4 12:36:25.035: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.738868571s
Jan  4 12:36:27.101: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.804151211s
STEP: Saw pod success
Jan  4 12:36:27.101: INFO: Pod "pod-ccd6b0d4-2eee-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:36:27.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ccd6b0d4-2eee-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 12:36:27.381: INFO: Waiting for pod pod-ccd6b0d4-2eee-11ea-9996-0242ac110006 to disappear
Jan  4 12:36:27.482: INFO: Pod pod-ccd6b0d4-2eee-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:36:27.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-67nlv" for this suite.
Jan  4 12:36:33.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:36:33.718: INFO: namespace: e2e-tests-emptydir-67nlv, resource: bindings, ignored listing per whitelist
Jan  4 12:36:33.765: INFO: namespace e2e-tests-emptydir-67nlv deletion completed in 6.272831059s

• [SLOW TEST:17.794 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
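Editor's note: the (root,0666,default) case above mounts an emptyDir on the default (node-disk) medium and verifies a file created with mode 0666 inside it. A hedged sketch of a pod doing roughly the same with busybox (the conformance test uses its own mounttest image instead):

package e2eexamples

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateEmptyDirPod mounts an emptyDir on the default medium and writes a
// 0666-mode file into it, roughly the (root,0666,default) scenario above.
func CreateEmptyDirPod(cs kubernetes.Interface, ns string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name:         "test-volume",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault}},
			}},
			Containers: []v1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(pod)
	return err
}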
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:36:33.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:36:44.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-299qr" for this suite.
Jan  4 12:36:50.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:36:51.856: INFO: namespace: e2e-tests-emptydir-wrapper-299qr, resource: bindings, ignored listing per whitelist
Jan  4 12:36:51.894: INFO: namespace e2e-tests-emptydir-wrapper-299qr deletion completed in 7.085215482s

• [SLOW TEST:18.128 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:36:51.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  4 12:37:06.868: INFO: Successfully updated pod "annotationupdatee242ed38-2eee-11ea-9996-0242ac110006"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:37:08.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b42nm" for this suite.
Jan  4 12:37:33.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:37:33.205: INFO: namespace: e2e-tests-downward-api-b42nm, resource: bindings, ignored listing per whitelist
Jan  4 12:37:33.316: INFO: namespace e2e-tests-downward-api-b42nm deletion completed in 24.307486388s

• [SLOW TEST:41.422 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:37:33.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-faedd4d6-2eee-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 12:37:33.737: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-hfbbh" to be "success or failure"
Jan  4 12:37:33.774: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 37.682261ms
Jan  4 12:37:36.627: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890107818s
Jan  4 12:37:38.664: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.92761345s
Jan  4 12:37:41.575: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.838122867s
Jan  4 12:37:43.608: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.870978058s
Jan  4 12:37:45.841: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.104055553s
Jan  4 12:37:47.900: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.162912966s
STEP: Saw pod success
Jan  4 12:37:47.900: INFO: Pod "pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:37:47.906: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 12:37:48.570: INFO: Waiting for pod pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006 to disappear
Jan  4 12:37:48.602: INFO: Pod pod-projected-secrets-faf2c624-2eee-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:37:48.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hfbbh" for this suite.
Jan  4 12:37:56.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:37:56.919: INFO: namespace: e2e-tests-projected-hfbbh, resource: bindings, ignored listing per whitelist
Jan  4 12:37:56.968: INFO: namespace e2e-tests-projected-hfbbh deletion completed in 8.319357069s

• [SLOW TEST:23.652 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
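Editor's note: unlike the plain Secret-volume cases earlier, the Projected secret test above wraps the Secret in a "projected" volume source, which can combine secrets, configMaps, and downwardAPI items in one mount. A compact, illustrative builder for that volume shape:

package e2eexamples

import (
	v1 "k8s.io/api/core/v1"
)

// ProjectedSecretVolume builds a projected volume whose single source is a
// Secret, the volume type exercised by the Projected secret test above.
func ProjectedSecretVolume(secretName string) v1.Volume {
	return v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}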
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:37:56.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-s9l94
I0104 12:37:57.458529       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-s9l94, replica count: 1
I0104 12:37:58.509987       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:37:59.511062       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:00.511598       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:01.512405       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:02.513078       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:03.513562       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:04.514328       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:05.514789       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:06.515238       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:07.515761       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 12:38:08.516375       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  4 12:38:08.694: INFO: Created: latency-svc-qcj22
Jan  4 12:38:08.867: INFO: Got endpoints: latency-svc-qcj22 [251.009783ms]
Jan  4 12:38:09.012: INFO: Created: latency-svc-f62mz
Jan  4 12:38:09.086: INFO: Got endpoints: latency-svc-f62mz [217.903595ms]
Jan  4 12:38:09.280: INFO: Created: latency-svc-zqr2m
Jan  4 12:38:09.315: INFO: Got endpoints: latency-svc-zqr2m [445.102753ms]
Jan  4 12:38:09.505: INFO: Created: latency-svc-kz2bh
Jan  4 12:38:09.517: INFO: Got endpoints: latency-svc-kz2bh [647.571961ms]
Jan  4 12:38:09.733: INFO: Created: latency-svc-z7hmt
Jan  4 12:38:09.972: INFO: Got endpoints: latency-svc-z7hmt [1.101589817s]
Jan  4 12:38:09.983: INFO: Created: latency-svc-9qwrx
Jan  4 12:38:09.994: INFO: Got endpoints: latency-svc-9qwrx [1.123641969s]
Jan  4 12:38:10.209: INFO: Created: latency-svc-vgczz
Jan  4 12:38:10.247: INFO: Got endpoints: latency-svc-vgczz [1.376945598s]
Jan  4 12:38:10.444: INFO: Created: latency-svc-xpgw4
Jan  4 12:38:10.458: INFO: Got endpoints: latency-svc-xpgw4 [1.588570545s]
Jan  4 12:38:10.659: INFO: Created: latency-svc-m5wqk
Jan  4 12:38:10.699: INFO: Got endpoints: latency-svc-m5wqk [1.828541262s]
Jan  4 12:38:10.975: INFO: Created: latency-svc-cx9rw
Jan  4 12:38:10.996: INFO: Got endpoints: latency-svc-cx9rw [2.125587426s]
Jan  4 12:38:11.252: INFO: Created: latency-svc-nz96n
Jan  4 12:38:11.269: INFO: Got endpoints: latency-svc-nz96n [2.398098565s]
Jan  4 12:38:11.537: INFO: Created: latency-svc-k72jg
Jan  4 12:38:11.580: INFO: Got endpoints: latency-svc-k72jg [2.709551568s]
Jan  4 12:38:11.845: INFO: Created: latency-svc-d4rnn
Jan  4 12:38:11.869: INFO: Got endpoints: latency-svc-d4rnn [2.998466145s]
Jan  4 12:38:12.086: INFO: Created: latency-svc-zkwcm
Jan  4 12:38:12.104: INFO: Got endpoints: latency-svc-zkwcm [3.234410198s]
Jan  4 12:38:12.400: INFO: Created: latency-svc-946dp
Jan  4 12:38:12.410: INFO: Got endpoints: latency-svc-946dp [3.539766055s]
Jan  4 12:38:12.724: INFO: Created: latency-svc-4ql6h
Jan  4 12:38:12.758: INFO: Got endpoints: latency-svc-4ql6h [3.887154099s]
Jan  4 12:38:13.085: INFO: Created: latency-svc-792ml
Jan  4 12:38:13.113: INFO: Got endpoints: latency-svc-792ml [4.026444974s]
Jan  4 12:38:13.302: INFO: Created: latency-svc-fcnc7
Jan  4 12:38:13.321: INFO: Got endpoints: latency-svc-fcnc7 [4.006593156s]
Jan  4 12:38:13.620: INFO: Created: latency-svc-m7v45
Jan  4 12:38:13.620: INFO: Got endpoints: latency-svc-m7v45 [4.102602709s]
Jan  4 12:38:13.819: INFO: Created: latency-svc-8kbcr
Jan  4 12:38:13.839: INFO: Got endpoints: latency-svc-8kbcr [3.867191581s]
Jan  4 12:38:14.187: INFO: Created: latency-svc-n7j66
Jan  4 12:38:14.203: INFO: Got endpoints: latency-svc-n7j66 [4.209620858s]
Jan  4 12:38:14.458: INFO: Created: latency-svc-nxwm5
Jan  4 12:38:14.478: INFO: Got endpoints: latency-svc-nxwm5 [4.230870992s]
Jan  4 12:38:14.723: INFO: Created: latency-svc-4qndl
Jan  4 12:38:14.745: INFO: Got endpoints: latency-svc-4qndl [4.286650567s]
Jan  4 12:38:14.970: INFO: Created: latency-svc-zpfs5
Jan  4 12:38:14.999: INFO: Got endpoints: latency-svc-zpfs5 [4.29947098s]
Jan  4 12:38:15.224: INFO: Created: latency-svc-np6xx
Jan  4 12:38:15.243: INFO: Got endpoints: latency-svc-np6xx [4.24730872s]
Jan  4 12:38:15.321: INFO: Created: latency-svc-tm62l
Jan  4 12:38:15.346: INFO: Got endpoints: latency-svc-tm62l [4.076865425s]
Jan  4 12:38:15.590: INFO: Created: latency-svc-s7r8v
Jan  4 12:38:15.612: INFO: Got endpoints: latency-svc-s7r8v [4.031462485s]
Jan  4 12:38:15.798: INFO: Created: latency-svc-sfd7x
Jan  4 12:38:15.836: INFO: Got endpoints: latency-svc-sfd7x [3.967596458s]
Jan  4 12:38:16.011: INFO: Created: latency-svc-thvk8
Jan  4 12:38:16.036: INFO: Got endpoints: latency-svc-thvk8 [3.931593273s]
Jan  4 12:38:16.606: INFO: Created: latency-svc-6zdfm
Jan  4 12:38:16.633: INFO: Got endpoints: latency-svc-6zdfm [4.222668336s]
Jan  4 12:38:16.814: INFO: Created: latency-svc-8fp97
Jan  4 12:38:16.855: INFO: Got endpoints: latency-svc-8fp97 [4.096977598s]
Jan  4 12:38:17.257: INFO: Created: latency-svc-t5msl
Jan  4 12:38:17.257: INFO: Got endpoints: latency-svc-t5msl [4.144295991s]
Jan  4 12:38:17.434: INFO: Created: latency-svc-cq85l
Jan  4 12:38:17.454: INFO: Got endpoints: latency-svc-cq85l [4.132121824s]
Jan  4 12:38:18.044: INFO: Created: latency-svc-9nzdf
Jan  4 12:38:18.074: INFO: Got endpoints: latency-svc-9nzdf [4.45439593s]
Jan  4 12:38:18.269: INFO: Created: latency-svc-vwzb4
Jan  4 12:38:18.294: INFO: Got endpoints: latency-svc-vwzb4 [4.454295497s]
Jan  4 12:38:18.368: INFO: Created: latency-svc-ctfp5
Jan  4 12:38:18.508: INFO: Got endpoints: latency-svc-ctfp5 [4.304469684s]
Jan  4 12:38:18.529: INFO: Created: latency-svc-xgplg
Jan  4 12:38:18.558: INFO: Got endpoints: latency-svc-xgplg [4.079628454s]
Jan  4 12:38:18.756: INFO: Created: latency-svc-27x7m
Jan  4 12:38:18.787: INFO: Got endpoints: latency-svc-27x7m [4.041925094s]
Jan  4 12:38:19.051: INFO: Created: latency-svc-z7wd9
Jan  4 12:38:19.086: INFO: Got endpoints: latency-svc-z7wd9 [4.087206512s]
Jan  4 12:38:19.151: INFO: Created: latency-svc-kjvzn
Jan  4 12:38:19.286: INFO: Got endpoints: latency-svc-kjvzn [4.042589875s]
Jan  4 12:38:19.317: INFO: Created: latency-svc-krsjj
Jan  4 12:38:19.324: INFO: Got endpoints: latency-svc-krsjj [3.978316617s]
Jan  4 12:38:19.369: INFO: Created: latency-svc-mnczh
Jan  4 12:38:19.531: INFO: Got endpoints: latency-svc-mnczh [3.919245346s]
Jan  4 12:38:19.566: INFO: Created: latency-svc-2qdh8
Jan  4 12:38:19.583: INFO: Got endpoints: latency-svc-2qdh8 [3.746332832s]
Jan  4 12:38:19.634: INFO: Created: latency-svc-qm6k8
Jan  4 12:38:20.247: INFO: Got endpoints: latency-svc-qm6k8 [4.210989729s]
Jan  4 12:38:20.298: INFO: Created: latency-svc-mt4cr
Jan  4 12:38:20.449: INFO: Got endpoints: latency-svc-mt4cr [3.815793714s]
Jan  4 12:38:20.496: INFO: Created: latency-svc-hwrbw
Jan  4 12:38:20.525: INFO: Got endpoints: latency-svc-hwrbw [3.67004288s]
Jan  4 12:38:20.676: INFO: Created: latency-svc-7rqgm
Jan  4 12:38:20.689: INFO: Got endpoints: latency-svc-7rqgm [3.431289224s]
Jan  4 12:38:20.884: INFO: Created: latency-svc-p7vdj
Jan  4 12:38:20.921: INFO: Got endpoints: latency-svc-p7vdj [3.467576885s]
Jan  4 12:38:21.064: INFO: Created: latency-svc-xf5ns
Jan  4 12:38:21.068: INFO: Got endpoints: latency-svc-xf5ns [2.993394765s]
Jan  4 12:38:21.147: INFO: Created: latency-svc-mnwlm
Jan  4 12:38:21.275: INFO: Got endpoints: latency-svc-mnwlm [2.980789785s]
Jan  4 12:38:21.312: INFO: Created: latency-svc-62q27
Jan  4 12:38:21.318: INFO: Got endpoints: latency-svc-62q27 [2.809820645s]
Jan  4 12:38:21.503: INFO: Created: latency-svc-qbmzc
Jan  4 12:38:21.536: INFO: Got endpoints: latency-svc-qbmzc [2.977480759s]
Jan  4 12:38:21.591: INFO: Created: latency-svc-d2hvs
Jan  4 12:38:21.738: INFO: Got endpoints: latency-svc-d2hvs [2.950006463s]
Jan  4 12:38:21.803: INFO: Created: latency-svc-wrglm
Jan  4 12:38:21.804: INFO: Got endpoints: latency-svc-wrglm [2.717302491s]
Jan  4 12:38:21.943: INFO: Created: latency-svc-n75gc
Jan  4 12:38:21.965: INFO: Got endpoints: latency-svc-n75gc [2.678328638s]
Jan  4 12:38:22.007: INFO: Created: latency-svc-2hql2
Jan  4 12:38:22.106: INFO: Got endpoints: latency-svc-2hql2 [301.91742ms]
Jan  4 12:38:22.137: INFO: Created: latency-svc-jbdxb
Jan  4 12:38:22.146: INFO: Got endpoints: latency-svc-jbdxb [2.821998991s]
Jan  4 12:38:22.386: INFO: Created: latency-svc-s4tvh
Jan  4 12:38:22.392: INFO: Got endpoints: latency-svc-s4tvh [2.86095069s]
Jan  4 12:38:22.433: INFO: Created: latency-svc-kzzxs
Jan  4 12:38:22.564: INFO: Got endpoints: latency-svc-kzzxs [2.980752515s]
Jan  4 12:38:22.592: INFO: Created: latency-svc-jcct6
Jan  4 12:38:22.621: INFO: Got endpoints: latency-svc-jcct6 [2.37395368s]
Jan  4 12:38:22.854: INFO: Created: latency-svc-qrd77
Jan  4 12:38:22.887: INFO: Got endpoints: latency-svc-qrd77 [2.437530064s]
Jan  4 12:38:23.029: INFO: Created: latency-svc-vf9ft
Jan  4 12:38:23.030: INFO: Got endpoints: latency-svc-vf9ft [2.504561948s]
Jan  4 12:38:23.094: INFO: Created: latency-svc-4cr6m
Jan  4 12:38:23.207: INFO: Got endpoints: latency-svc-4cr6m [2.518549915s]
Jan  4 12:38:23.233: INFO: Created: latency-svc-rkh9h
Jan  4 12:38:23.247: INFO: Got endpoints: latency-svc-rkh9h [2.325441759s]
Jan  4 12:38:23.425: INFO: Created: latency-svc-xdb88
Jan  4 12:38:23.441: INFO: Got endpoints: latency-svc-xdb88 [2.372737583s]
Jan  4 12:38:23.496: INFO: Created: latency-svc-9w685
Jan  4 12:38:23.611: INFO: Got endpoints: latency-svc-9w685 [2.336679675s]
Jan  4 12:38:23.668: INFO: Created: latency-svc-jxgcx
Jan  4 12:38:23.764: INFO: Got endpoints: latency-svc-jxgcx [2.445893803s]
Jan  4 12:38:24.786: INFO: Created: latency-svc-hr2cl
Jan  4 12:38:24.856: INFO: Got endpoints: latency-svc-hr2cl [3.320473652s]
Jan  4 12:38:25.119: INFO: Created: latency-svc-mxxrn
Jan  4 12:38:25.138: INFO: Got endpoints: latency-svc-mxxrn [3.400044315s]
Jan  4 12:38:25.218: INFO: Created: latency-svc-lj7sx
Jan  4 12:38:25.292: INFO: Got endpoints: latency-svc-lj7sx [3.327064096s]
Jan  4 12:38:25.355: INFO: Created: latency-svc-hwqth
Jan  4 12:38:25.367: INFO: Got endpoints: latency-svc-hwqth [3.26101104s]
Jan  4 12:38:25.543: INFO: Created: latency-svc-j7km8
Jan  4 12:38:25.555: INFO: Got endpoints: latency-svc-j7km8 [3.40883304s]
Jan  4 12:38:25.605: INFO: Created: latency-svc-9968f
Jan  4 12:38:25.618: INFO: Got endpoints: latency-svc-9968f [3.226165965s]
Jan  4 12:38:25.761: INFO: Created: latency-svc-8nkvg
Jan  4 12:38:25.848: INFO: Got endpoints: latency-svc-8nkvg [3.28356215s]
Jan  4 12:38:25.989: INFO: Created: latency-svc-cgc27
Jan  4 12:38:25.995: INFO: Got endpoints: latency-svc-cgc27 [3.373998254s]
Jan  4 12:38:26.057: INFO: Created: latency-svc-fmrtv
Jan  4 12:38:26.146: INFO: Got endpoints: latency-svc-fmrtv [3.259138507s]
Jan  4 12:38:26.192: INFO: Created: latency-svc-h8nnc
Jan  4 12:38:26.205: INFO: Got endpoints: latency-svc-h8nnc [3.175613631s]
Jan  4 12:38:26.408: INFO: Created: latency-svc-s5r8n
Jan  4 12:38:26.432: INFO: Got endpoints: latency-svc-s5r8n [3.224452444s]
Jan  4 12:38:26.658: INFO: Created: latency-svc-78hgs
Jan  4 12:38:26.667: INFO: Got endpoints: latency-svc-78hgs [3.41960862s]
Jan  4 12:38:26.796: INFO: Created: latency-svc-jslb4
Jan  4 12:38:26.977: INFO: Got endpoints: latency-svc-jslb4 [3.53579467s]
Jan  4 12:38:27.021: INFO: Created: latency-svc-58s77
Jan  4 12:38:27.046: INFO: Got endpoints: latency-svc-58s77 [3.434040095s]
Jan  4 12:38:27.184: INFO: Created: latency-svc-8htqd
Jan  4 12:38:27.207: INFO: Got endpoints: latency-svc-8htqd [3.442952051s]
Jan  4 12:38:27.375: INFO: Created: latency-svc-t4g6q
Jan  4 12:38:27.551: INFO: Got endpoints: latency-svc-t4g6q [2.693661207s]
Jan  4 12:38:27.560: INFO: Created: latency-svc-2l6fb
Jan  4 12:38:27.600: INFO: Got endpoints: latency-svc-2l6fb [2.461480437s]
Jan  4 12:38:27.768: INFO: Created: latency-svc-2prp7
Jan  4 12:38:27.795: INFO: Got endpoints: latency-svc-2prp7 [2.502683814s]
Jan  4 12:38:27.963: INFO: Created: latency-svc-qshtk
Jan  4 12:38:27.976: INFO: Got endpoints: latency-svc-qshtk [2.609469854s]
Jan  4 12:38:28.070: INFO: Created: latency-svc-5tpws
Jan  4 12:38:28.155: INFO: Got endpoints: latency-svc-5tpws [2.599498938s]
Jan  4 12:38:28.216: INFO: Created: latency-svc-d7bv5
Jan  4 12:38:28.221: INFO: Got endpoints: latency-svc-d7bv5 [2.602494021s]
Jan  4 12:38:28.345: INFO: Created: latency-svc-582vv
Jan  4 12:38:28.364: INFO: Got endpoints: latency-svc-582vv [2.516047516s]
Jan  4 12:38:28.569: INFO: Created: latency-svc-msczd
Jan  4 12:38:28.593: INFO: Got endpoints: latency-svc-msczd [2.597750257s]
Jan  4 12:38:28.788: INFO: Created: latency-svc-b6bhz
Jan  4 12:38:28.804: INFO: Got endpoints: latency-svc-b6bhz [2.657441247s]
Jan  4 12:38:28.959: INFO: Created: latency-svc-2dc4k
Jan  4 12:38:28.963: INFO: Got endpoints: latency-svc-2dc4k [2.757459697s]
Jan  4 12:38:29.153: INFO: Created: latency-svc-9s4gs
Jan  4 12:38:29.206: INFO: Got endpoints: latency-svc-9s4gs [2.773732186s]
Jan  4 12:38:29.317: INFO: Created: latency-svc-q6m7d
Jan  4 12:38:29.351: INFO: Got endpoints: latency-svc-q6m7d [2.684365677s]
Jan  4 12:38:29.590: INFO: Created: latency-svc-ppknw
Jan  4 12:38:29.603: INFO: Got endpoints: latency-svc-ppknw [2.625692592s]
Jan  4 12:38:29.840: INFO: Created: latency-svc-rgtb4
Jan  4 12:38:29.870: INFO: Got endpoints: latency-svc-rgtb4 [2.824007688s]
Jan  4 12:38:30.039: INFO: Created: latency-svc-pc8rt
Jan  4 12:38:30.051: INFO: Got endpoints: latency-svc-pc8rt [2.843650564s]
Jan  4 12:38:30.103: INFO: Created: latency-svc-cbv29
Jan  4 12:38:30.204: INFO: Got endpoints: latency-svc-cbv29 [2.652760478s]
Jan  4 12:38:30.228: INFO: Created: latency-svc-bwqhd
Jan  4 12:38:30.242: INFO: Got endpoints: latency-svc-bwqhd [2.642112038s]
Jan  4 12:38:30.383: INFO: Created: latency-svc-x8gdq
Jan  4 12:38:30.387: INFO: Got endpoints: latency-svc-x8gdq [2.591499479s]
Jan  4 12:38:30.451: INFO: Created: latency-svc-2xkfl
Jan  4 12:38:30.468: INFO: Got endpoints: latency-svc-2xkfl [2.491911307s]
Jan  4 12:38:30.659: INFO: Created: latency-svc-g25ws
Jan  4 12:38:30.689: INFO: Got endpoints: latency-svc-g25ws [2.533122155s]
Jan  4 12:38:30.814: INFO: Created: latency-svc-f4zvc
Jan  4 12:38:30.817: INFO: Got endpoints: latency-svc-f4zvc [2.595978186s]
Jan  4 12:38:30.875: INFO: Created: latency-svc-ldlhp
Jan  4 12:38:30.875: INFO: Got endpoints: latency-svc-ldlhp [2.510724132s]
Jan  4 12:38:30.981: INFO: Created: latency-svc-vvxqx
Jan  4 12:38:30.998: INFO: Got endpoints: latency-svc-vvxqx [2.404878275s]
Jan  4 12:38:31.061: INFO: Created: latency-svc-cgdtc
Jan  4 12:38:31.180: INFO: Got endpoints: latency-svc-cgdtc [2.37550468s]
Jan  4 12:38:31.218: INFO: Created: latency-svc-4cqjt
Jan  4 12:38:31.245: INFO: Got endpoints: latency-svc-4cqjt [2.282385418s]
Jan  4 12:38:31.380: INFO: Created: latency-svc-2ldbm
Jan  4 12:38:31.397: INFO: Got endpoints: latency-svc-2ldbm [2.191094718s]
Jan  4 12:38:31.637: INFO: Created: latency-svc-8lc8l
Jan  4 12:38:31.637: INFO: Got endpoints: latency-svc-8lc8l [2.285520847s]
Jan  4 12:38:31.810: INFO: Created: latency-svc-qdq7k
Jan  4 12:38:31.841: INFO: Got endpoints: latency-svc-qdq7k [2.238515257s]
Jan  4 12:38:31.850: INFO: Created: latency-svc-9fj4r
Jan  4 12:38:31.879: INFO: Got endpoints: latency-svc-9fj4r [2.008578654s]
Jan  4 12:38:32.030: INFO: Created: latency-svc-vxqkq
Jan  4 12:38:32.067: INFO: Got endpoints: latency-svc-vxqkq [2.015494201s]
Jan  4 12:38:32.185: INFO: Created: latency-svc-fzdvt
Jan  4 12:38:32.202: INFO: Got endpoints: latency-svc-fzdvt [1.997785396s]
Jan  4 12:38:32.270: INFO: Created: latency-svc-cgn5b
Jan  4 12:38:32.391: INFO: Got endpoints: latency-svc-cgn5b [2.148907318s]
Jan  4 12:38:32.426: INFO: Created: latency-svc-nhffn
Jan  4 12:38:32.432: INFO: Got endpoints: latency-svc-nhffn [2.045771153s]
Jan  4 12:38:32.593: INFO: Created: latency-svc-rzd4h
Jan  4 12:38:32.608: INFO: Got endpoints: latency-svc-rzd4h [2.139162566s]
Jan  4 12:38:32.684: INFO: Created: latency-svc-c7g9k
Jan  4 12:38:32.848: INFO: Got endpoints: latency-svc-c7g9k [2.158610589s]
Jan  4 12:38:32.890: INFO: Created: latency-svc-74dcw
Jan  4 12:38:32.913: INFO: Got endpoints: latency-svc-74dcw [2.096129529s]
Jan  4 12:38:33.029: INFO: Created: latency-svc-gg89v
Jan  4 12:38:33.052: INFO: Got endpoints: latency-svc-gg89v [2.176938019s]
Jan  4 12:38:33.280: INFO: Created: latency-svc-vgrvk
Jan  4 12:38:33.294: INFO: Got endpoints: latency-svc-vgrvk [2.295484069s]
Jan  4 12:38:33.345: INFO: Created: latency-svc-lzj9w
Jan  4 12:38:33.473: INFO: Got endpoints: latency-svc-lzj9w [2.292495075s]
Jan  4 12:38:33.497: INFO: Created: latency-svc-xdfkw
Jan  4 12:38:33.515: INFO: Got endpoints: latency-svc-xdfkw [2.269869816s]
Jan  4 12:38:33.691: INFO: Created: latency-svc-cj5bc
Jan  4 12:38:33.696: INFO: Got endpoints: latency-svc-cj5bc [2.29884987s]
Jan  4 12:38:33.882: INFO: Created: latency-svc-n2dx7
Jan  4 12:38:33.922: INFO: Got endpoints: latency-svc-n2dx7 [2.2845191s]
Jan  4 12:38:34.213: INFO: Created: latency-svc-bgkjn
Jan  4 12:38:34.229: INFO: Got endpoints: latency-svc-bgkjn [2.387448403s]
Jan  4 12:38:34.408: INFO: Created: latency-svc-l7q8p
Jan  4 12:38:34.419: INFO: Got endpoints: latency-svc-l7q8p [2.540361873s]
Jan  4 12:38:34.518: INFO: Created: latency-svc-kl9kd
Jan  4 12:38:34.757: INFO: Got endpoints: latency-svc-kl9kd [2.690121116s]
Jan  4 12:38:34.803: INFO: Created: latency-svc-mkk2b
Jan  4 12:38:34.862: INFO: Got endpoints: latency-svc-mkk2b [2.659662128s]
Jan  4 12:38:35.006: INFO: Created: latency-svc-vxmdf
Jan  4 12:38:35.028: INFO: Got endpoints: latency-svc-vxmdf [2.636304296s]
Jan  4 12:38:35.222: INFO: Created: latency-svc-gbgvn
Jan  4 12:38:35.222: INFO: Got endpoints: latency-svc-gbgvn [2.789932828s]
Jan  4 12:38:35.439: INFO: Created: latency-svc-84dxl
Jan  4 12:38:35.441: INFO: Got endpoints: latency-svc-84dxl [2.833091121s]
Jan  4 12:38:35.670: INFO: Created: latency-svc-cxsqz
Jan  4 12:38:35.689: INFO: Got endpoints: latency-svc-cxsqz [2.841242259s]
Jan  4 12:38:35.781: INFO: Created: latency-svc-fc4fn
Jan  4 12:38:35.782: INFO: Got endpoints: latency-svc-fc4fn [2.867962518s]
Jan  4 12:38:35.965: INFO: Created: latency-svc-c2ttm
Jan  4 12:38:35.988: INFO: Got endpoints: latency-svc-c2ttm [2.935155173s]
Jan  4 12:38:36.233: INFO: Created: latency-svc-k9fcn
Jan  4 12:38:36.233: INFO: Got endpoints: latency-svc-k9fcn [2.938330589s]
Jan  4 12:38:36.437: INFO: Created: latency-svc-9bbwl
Jan  4 12:38:36.472: INFO: Got endpoints: latency-svc-9bbwl [2.998845167s]
Jan  4 12:38:36.648: INFO: Created: latency-svc-rr5v8
Jan  4 12:38:36.661: INFO: Got endpoints: latency-svc-rr5v8 [3.144928369s]
Jan  4 12:38:36.836: INFO: Created: latency-svc-b8k8j
Jan  4 12:38:36.859: INFO: Got endpoints: latency-svc-b8k8j [3.163026922s]
Jan  4 12:38:38.062: INFO: Created: latency-svc-8rc66
Jan  4 12:38:38.093: INFO: Got endpoints: latency-svc-8rc66 [4.171564936s]
Jan  4 12:38:38.465: INFO: Created: latency-svc-hnkzc
Jan  4 12:38:38.673: INFO: Got endpoints: latency-svc-hnkzc [4.443453179s]
Jan  4 12:38:39.339: INFO: Created: latency-svc-pmblz
Jan  4 12:38:39.522: INFO: Created: latency-svc-m9kf2
Jan  4 12:38:39.575: INFO: Got endpoints: latency-svc-pmblz [5.155020847s]
Jan  4 12:38:39.586: INFO: Got endpoints: latency-svc-m9kf2 [4.828317867s]
Jan  4 12:38:39.833: INFO: Created: latency-svc-t72cn
Jan  4 12:38:39.885: INFO: Got endpoints: latency-svc-t72cn [5.02246372s]
Jan  4 12:38:40.091: INFO: Created: latency-svc-gwfp4
Jan  4 12:38:40.092: INFO: Got endpoints: latency-svc-gwfp4 [5.063761184s]
Jan  4 12:38:40.213: INFO: Created: latency-svc-jfsz2
Jan  4 12:38:40.256: INFO: Got endpoints: latency-svc-jfsz2 [5.033636213s]
Jan  4 12:38:40.316: INFO: Created: latency-svc-nfc6z
Jan  4 12:38:40.457: INFO: Got endpoints: latency-svc-nfc6z [5.0158263s]
Jan  4 12:38:40.561: INFO: Created: latency-svc-bkqk5
Jan  4 12:38:40.745: INFO: Got endpoints: latency-svc-bkqk5 [5.055879329s]
Jan  4 12:38:40.831: INFO: Created: latency-svc-ltrkn
Jan  4 12:38:41.033: INFO: Got endpoints: latency-svc-ltrkn [5.251253937s]
Jan  4 12:38:41.049: INFO: Created: latency-svc-8rnfl
Jan  4 12:38:41.087: INFO: Got endpoints: latency-svc-8rnfl [5.099015739s]
Jan  4 12:38:41.352: INFO: Created: latency-svc-lb4zj
Jan  4 12:38:41.387: INFO: Got endpoints: latency-svc-lb4zj [5.153829817s]
Jan  4 12:38:41.622: INFO: Created: latency-svc-mcvh5
Jan  4 12:38:41.639: INFO: Got endpoints: latency-svc-mcvh5 [5.166704457s]
Jan  4 12:38:41.726: INFO: Created: latency-svc-sskcf
Jan  4 12:38:41.905: INFO: Got endpoints: latency-svc-sskcf [5.24394742s]
Jan  4 12:38:42.011: INFO: Created: latency-svc-npktc
Jan  4 12:38:42.172: INFO: Got endpoints: latency-svc-npktc [5.312068144s]
Jan  4 12:38:42.241: INFO: Created: latency-svc-2pjpf
Jan  4 12:38:42.398: INFO: Got endpoints: latency-svc-2pjpf [4.304084236s]
Jan  4 12:38:42.472: INFO: Created: latency-svc-gmqw7
Jan  4 12:38:42.750: INFO: Got endpoints: latency-svc-gmqw7 [4.077385661s]
Jan  4 12:38:42.763: INFO: Created: latency-svc-q4g6n
Jan  4 12:38:42.778: INFO: Got endpoints: latency-svc-q4g6n [3.20274815s]
Jan  4 12:38:43.022: INFO: Created: latency-svc-26fwr
Jan  4 12:38:43.049: INFO: Got endpoints: latency-svc-26fwr [3.463102033s]
Jan  4 12:38:43.321: INFO: Created: latency-svc-7xjsc
Jan  4 12:38:43.336: INFO: Got endpoints: latency-svc-7xjsc [3.451061777s]
Jan  4 12:38:43.520: INFO: Created: latency-svc-fnpww
Jan  4 12:38:43.548: INFO: Got endpoints: latency-svc-fnpww [3.456249595s]
Jan  4 12:38:43.760: INFO: Created: latency-svc-hjjb7
Jan  4 12:38:43.811: INFO: Got endpoints: latency-svc-hjjb7 [3.554091435s]
Jan  4 12:38:43.982: INFO: Created: latency-svc-k82rf
Jan  4 12:38:44.003: INFO: Got endpoints: latency-svc-k82rf [3.546194447s]
Jan  4 12:38:44.175: INFO: Created: latency-svc-m46ml
Jan  4 12:38:44.201: INFO: Got endpoints: latency-svc-m46ml [3.45573392s]
Jan  4 12:38:44.386: INFO: Created: latency-svc-bb5xf
Jan  4 12:38:44.401: INFO: Got endpoints: latency-svc-bb5xf [3.368105463s]
Jan  4 12:38:44.484: INFO: Created: latency-svc-9gmz4
Jan  4 12:38:44.633: INFO: Got endpoints: latency-svc-9gmz4 [3.545976951s]
Jan  4 12:38:44.696: INFO: Created: latency-svc-fpr69
Jan  4 12:38:44.870: INFO: Got endpoints: latency-svc-fpr69 [3.483463948s]
Jan  4 12:38:44.942: INFO: Created: latency-svc-z2f8n
Jan  4 12:38:45.091: INFO: Got endpoints: latency-svc-z2f8n [3.452376025s]
Jan  4 12:38:45.109: INFO: Created: latency-svc-pfkp5
Jan  4 12:38:45.126: INFO: Got endpoints: latency-svc-pfkp5 [3.220824982s]
Jan  4 12:38:45.455: INFO: Created: latency-svc-lb7zw
Jan  4 12:38:45.736: INFO: Got endpoints: latency-svc-lb7zw [3.564654194s]
Jan  4 12:38:45.772: INFO: Created: latency-svc-mf8d5
Jan  4 12:38:45.853: INFO: Got endpoints: latency-svc-mf8d5 [3.45450249s]
Jan  4 12:38:46.254: INFO: Created: latency-svc-kl5tw
Jan  4 12:38:46.255: INFO: Got endpoints: latency-svc-kl5tw [518.197401ms]
Jan  4 12:38:46.447: INFO: Created: latency-svc-2v66g
Jan  4 12:38:46.463: INFO: Got endpoints: latency-svc-2v66g [3.68487074s]
Jan  4 12:38:46.565: INFO: Created: latency-svc-gkp6w
Jan  4 12:38:46.688: INFO: Got endpoints: latency-svc-gkp6w [3.638925079s]
Jan  4 12:38:46.696: INFO: Created: latency-svc-lb4g5
Jan  4 12:38:46.713: INFO: Got endpoints: latency-svc-lb4g5 [3.376849028s]
Jan  4 12:38:46.819: INFO: Created: latency-svc-mqbrg
Jan  4 12:38:46.824: INFO: Got endpoints: latency-svc-mqbrg [3.27512261s]
Jan  4 12:38:46.868: INFO: Created: latency-svc-m6bfd
Jan  4 12:38:46.896: INFO: Got endpoints: latency-svc-m6bfd [3.084783209s]
Jan  4 12:38:47.053: INFO: Created: latency-svc-c5252
Jan  4 12:38:47.053: INFO: Got endpoints: latency-svc-c5252 [3.049885321s]
Jan  4 12:38:47.087: INFO: Created: latency-svc-q4th8
Jan  4 12:38:47.099: INFO: Got endpoints: latency-svc-q4th8 [2.89794921s]
Jan  4 12:38:47.207: INFO: Created: latency-svc-26fgp
Jan  4 12:38:47.258: INFO: Got endpoints: latency-svc-26fgp [2.856435203s]
Jan  4 12:38:47.293: INFO: Created: latency-svc-f4c7q
Jan  4 12:38:47.401: INFO: Got endpoints: latency-svc-f4c7q [2.767837591s]
Jan  4 12:38:47.424: INFO: Created: latency-svc-ddpv4
Jan  4 12:38:47.454: INFO: Got endpoints: latency-svc-ddpv4 [2.583033731s]
Jan  4 12:38:47.489: INFO: Created: latency-svc-gmdg7
Jan  4 12:38:47.505: INFO: Got endpoints: latency-svc-gmdg7 [2.413828832s]
Jan  4 12:38:47.672: INFO: Created: latency-svc-xz8kq
Jan  4 12:38:47.784: INFO: Got endpoints: latency-svc-xz8kq [2.658192014s]
Jan  4 12:38:47.784: INFO: Created: latency-svc-tg4p7
Jan  4 12:38:47.823: INFO: Got endpoints: latency-svc-tg4p7 [5.072466667s]
Jan  4 12:38:47.955: INFO: Created: latency-svc-jfvfk
Jan  4 12:38:47.965: INFO: Got endpoints: latency-svc-jfvfk [2.112185322s]
Jan  4 12:38:47.973: INFO: Created: latency-svc-hwdmp
Jan  4 12:38:47.988: INFO: Got endpoints: latency-svc-hwdmp [1.733319184s]
Jan  4 12:38:48.027: INFO: Created: latency-svc-kj8d5
Jan  4 12:38:48.045: INFO: Got endpoints: latency-svc-kj8d5 [1.58197651s]
Jan  4 12:38:48.187: INFO: Created: latency-svc-nwlvj
Jan  4 12:38:48.211: INFO: Got endpoints: latency-svc-nwlvj [1.522129974s]
Jan  4 12:38:48.351: INFO: Created: latency-svc-w69tt
Jan  4 12:38:48.360: INFO: Got endpoints: latency-svc-w69tt [1.646713949s]
Jan  4 12:38:48.439: INFO: Created: latency-svc-r5nqp
Jan  4 12:38:48.547: INFO: Got endpoints: latency-svc-r5nqp [1.722973903s]
Jan  4 12:38:48.579: INFO: Created: latency-svc-kv92s
Jan  4 12:38:48.599: INFO: Got endpoints: latency-svc-kv92s [1.703000179s]
Jan  4 12:38:48.754: INFO: Created: latency-svc-hhq7k
Jan  4 12:38:48.755: INFO: Got endpoints: latency-svc-hhq7k [1.701510441s]
Jan  4 12:38:48.795: INFO: Created: latency-svc-mbcmj
Jan  4 12:38:48.807: INFO: Got endpoints: latency-svc-mbcmj [1.708061614s]
Jan  4 12:38:48.976: INFO: Created: latency-svc-2jrk7
Jan  4 12:38:48.989: INFO: Got endpoints: latency-svc-2jrk7 [1.730859703s]
Jan  4 12:38:49.233: INFO: Created: latency-svc-xr79x
Jan  4 12:38:49.259: INFO: Got endpoints: latency-svc-xr79x [1.857284127s]
Jan  4 12:38:49.440: INFO: Created: latency-svc-bsfpc
Jan  4 12:38:49.461: INFO: Got endpoints: latency-svc-bsfpc [2.007321662s]
Jan  4 12:38:49.655: INFO: Created: latency-svc-fkfqk
Jan  4 12:38:49.933: INFO: Created: latency-svc-v6d8b
Jan  4 12:38:49.985: INFO: Got endpoints: latency-svc-v6d8b [2.200704763s]
Jan  4 12:38:49.994: INFO: Got endpoints: latency-svc-fkfqk [2.48828389s]
Jan  4 12:38:50.162: INFO: Created: latency-svc-9n4qf
Jan  4 12:38:50.176: INFO: Got endpoints: latency-svc-9n4qf [2.352188553s]
Jan  4 12:38:50.349: INFO: Created: latency-svc-2tjdn
Jan  4 12:38:50.368: INFO: Got endpoints: latency-svc-2tjdn [2.402098172s]
Jan  4 12:38:50.510: INFO: Created: latency-svc-sw747
Jan  4 12:38:50.570: INFO: Got endpoints: latency-svc-sw747 [2.581836245s]
Jan  4 12:38:50.730: INFO: Created: latency-svc-kd8wf
Jan  4 12:38:50.747: INFO: Got endpoints: latency-svc-kd8wf [2.701735851s]
Jan  4 12:38:50.747: INFO: Latencies: [217.903595ms 301.91742ms 445.102753ms 518.197401ms 647.571961ms 1.101589817s 1.123641969s 1.376945598s 1.522129974s 1.58197651s 1.588570545s 1.646713949s 1.701510441s 1.703000179s 1.708061614s 1.722973903s 1.730859703s 1.733319184s 1.828541262s 1.857284127s 1.997785396s 2.007321662s 2.008578654s 2.015494201s 2.045771153s 2.096129529s 2.112185322s 2.125587426s 2.139162566s 2.148907318s 2.158610589s 2.176938019s 2.191094718s 2.200704763s 2.238515257s 2.269869816s 2.282385418s 2.2845191s 2.285520847s 2.292495075s 2.295484069s 2.29884987s 2.325441759s 2.336679675s 2.352188553s 2.372737583s 2.37395368s 2.37550468s 2.387448403s 2.398098565s 2.402098172s 2.404878275s 2.413828832s 2.437530064s 2.445893803s 2.461480437s 2.48828389s 2.491911307s 2.502683814s 2.504561948s 2.510724132s 2.516047516s 2.518549915s 2.533122155s 2.540361873s 2.581836245s 2.583033731s 2.591499479s 2.595978186s 2.597750257s 2.599498938s 2.602494021s 2.609469854s 2.625692592s 2.636304296s 2.642112038s 2.652760478s 2.657441247s 2.658192014s 2.659662128s 2.678328638s 2.684365677s 2.690121116s 2.693661207s 2.701735851s 2.709551568s 2.717302491s 2.757459697s 2.767837591s 2.773732186s 2.789932828s 2.809820645s 2.821998991s 2.824007688s 2.833091121s 2.841242259s 2.843650564s 2.856435203s 2.86095069s 2.867962518s 2.89794921s 2.935155173s 2.938330589s 2.950006463s 2.977480759s 2.980752515s 2.980789785s 2.993394765s 2.998466145s 2.998845167s 3.049885321s 3.084783209s 3.144928369s 3.163026922s 3.175613631s 3.20274815s 3.220824982s 3.224452444s 3.226165965s 3.234410198s 3.259138507s 3.26101104s 3.27512261s 3.28356215s 3.320473652s 3.327064096s 3.368105463s 3.373998254s 3.376849028s 3.400044315s 3.40883304s 3.41960862s 3.431289224s 3.434040095s 3.442952051s 3.451061777s 3.452376025s 3.45450249s 3.45573392s 3.456249595s 3.463102033s 3.467576885s 3.483463948s 3.53579467s 3.539766055s 3.545976951s 3.546194447s 3.554091435s 3.564654194s 3.638925079s 3.67004288s 3.68487074s 3.746332832s 3.815793714s 3.867191581s 3.887154099s 3.919245346s 3.931593273s 3.967596458s 3.978316617s 4.006593156s 4.026444974s 4.031462485s 4.041925094s 4.042589875s 4.076865425s 4.077385661s 4.079628454s 4.087206512s 4.096977598s 4.102602709s 4.132121824s 4.144295991s 4.171564936s 4.209620858s 4.210989729s 4.222668336s 4.230870992s 4.24730872s 4.286650567s 4.29947098s 4.304084236s 4.304469684s 4.443453179s 4.454295497s 4.45439593s 4.828317867s 5.0158263s 5.02246372s 5.033636213s 5.055879329s 5.063761184s 5.072466667s 5.099015739s 5.153829817s 5.155020847s 5.166704457s 5.24394742s 5.251253937s 5.312068144s]
Jan  4 12:38:50.748: INFO: 50 %ile: 2.89794921s
Jan  4 12:38:50.748: INFO: 90 %ile: 4.29947098s
Jan  4 12:38:50.748: INFO: 99 %ile: 5.251253937s
Jan  4 12:38:50.748: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:38:50.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-s9l94" for this suite.
Jan  4 12:40:06.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:40:07.106: INFO: namespace: e2e-tests-svc-latency-s9l94, resource: bindings, ignored listing per whitelist
Jan  4 12:40:07.122: INFO: namespace e2e-tests-svc-latency-s9l94 deletion completed in 1m16.35461728s

• [SLOW TEST:130.153 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
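
The block above is the raw data behind the endpoints-latency check: for each of 200 short-lived services the suite records the time from its "Created:" line to its "Got endpoints:" line, then prints the 50th/90th/99th percentiles. To watch the same signal by hand, a minimal Service of this shape can be pointed at an already-running, labelled pod (the name and selector below are placeholders, not values the suite generates):

apiVersion: v1
kind: Service
metadata:
  name: latency-demo          # placeholder name
spec:
  selector:
    app: latency-demo         # must match the labels on an existing pod
  ports:
  - port: 80
    targetPort: 80

Running "kubectl get endpoints latency-demo -w" right after creating the Service shows when its endpoints object is populated; that gap is what each bracketed duration above measures.
------------------------------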
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:40:07.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-569b79b1-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:40:07.377: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-zd9hh" to be "success or failure"
Jan  4 12:40:07.385: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110013ms
Jan  4 12:40:09.895: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518681423s
Jan  4 12:40:11.913: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.536409969s
Jan  4 12:40:13.956: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579440135s
Jan  4 12:40:15.969: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.592399723s
Jan  4 12:40:17.981: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.604488779s
Jan  4 12:40:20.013: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.636300678s
STEP: Saw pod success
Jan  4 12:40:20.013: INFO: Pod "pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:40:20.022: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 12:40:20.644: INFO: Waiting for pod pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:40:21.125: INFO: Pod pod-projected-configmaps-569f11d5-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:40:21.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zd9hh" for this suite.
Jan  4 12:40:29.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:40:29.273: INFO: namespace: e2e-tests-projected-zd9hh, resource: bindings, ignored listing per whitelist
Jan  4 12:40:29.367: INFO: namespace e2e-tests-projected-zd9hh deletion completed in 8.228249707s

• [SLOW TEST:22.246 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
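
The spec above creates a configMap, projects it into a pod volume with defaultMode set, and reads the mounted files back before the "Saw pod success" step. A minimal pod of that shape, assuming placeholder names and a generic busybox image rather than whatever the suite generates:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo              # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # assumption: any image with a shell
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400                # the mode under test, applied to every projected file
      sources:
      - configMap:
          name: projected-cm-demo      # assumes this configMap already exists
------------------------------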
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:40:29.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006
Jan  4 12:40:30.051: INFO: Pod name my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006: Found 0 pods out of 1
Jan  4 12:40:35.741: INFO: Pod name my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006: Found 1 pods out of 1
Jan  4 12:40:35.741: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006" are running
Jan  4 12:40:41.773: INFO: Pod "my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006-hd5z5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:40:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:40:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:40:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 12:40:30 +0000 UTC Reason: Message:}])
Jan  4 12:40:41.774: INFO: Trying to dial the pod
Jan  4 12:40:47.000: INFO: Controller my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006: Got expected result from replica 1 [my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006-hd5z5]: "my-hostname-basic-640c7ec4-2eef-11ea-9996-0242ac110006-hd5z5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:40:47.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-km8h5" for this suite.
Jan  4 12:40:55.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:40:55.188: INFO: namespace: e2e-tests-replication-controller-km8h5, resource: bindings, ignored listing per whitelist
Jan  4 12:40:55.256: INFO: namespace e2e-tests-replication-controller-km8h5 deletion completed in 8.250989507s

• [SLOW TEST:25.888 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
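
This spec creates a single-replica ReplicationController, waits for its pod to run, then dials the replica and expects the reply to echo the pod's own name (the "Got expected result from replica 1" line). A ReplicationController of the same shape, with placeholder names and a stand-in image that serves its hostname:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo              # placeholder; the suite appends a UUID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: k8s.gcr.io/serve_hostname    # assumption: any image that answers with its hostname
        ports:
        - containerPort: 9376
------------------------------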
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:40:55.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 12:40:55.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nlqnm'
Jan  4 12:40:58.717: INFO: stderr: ""
Jan  4 12:40:58.717: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  4 12:40:58.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nlqnm'
Jan  4 12:40:59.083: INFO: stderr: ""
Jan  4 12:40:59.083: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:40:59.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nlqnm" for this suite.
Jan  4 12:41:07.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:41:07.312: INFO: namespace: e2e-tests-kubectl-nlqnm, resource: bindings, ignored listing per whitelist
Jan  4 12:41:07.409: INFO: namespace e2e-tests-kubectl-nlqnm deletion completed in 8.313016871s

• [SLOW TEST:12.153 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
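
With the 1.13-era generators used above, kubectl run with --restart=Never and --generator=run-pod/v1 creates a bare Pod named after the run, with no owning controller. Roughly the manifest that produces, using the name and image from the log (the label is an assumption about what the generator adds):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod            # assumption: kubectl run labels the pod with its run name
spec:
  restartPolicy: Never                 # set by --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
------------------------------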
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:41:07.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-f47x5/configmap-test-7a998b1c-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:41:07.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-f47x5" to be "success or failure"
Jan  4 12:41:07.859: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 124.443291ms
Jan  4 12:41:09.879: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143963052s
Jan  4 12:41:11.913: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177780511s
Jan  4 12:41:14.856: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120824975s
Jan  4 12:41:16.880: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.145225404s
Jan  4 12:41:18.913: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.178209328s
Jan  4 12:41:20.951: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.216376607s
STEP: Saw pod success
Jan  4 12:41:20.951: INFO: Pod "pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:41:20.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006 container env-test: 
STEP: delete the pod
Jan  4 12:41:21.128: INFO: Waiting for pod pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:41:21.139: INFO: Pod pod-configmaps-7a9aafe0-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:41:21.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f47x5" for this suite.
Jan  4 12:41:27.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:41:27.459: INFO: namespace: e2e-tests-configmap-f47x5, resource: bindings, ignored listing per whitelist
Jan  4 12:41:27.469: INFO: namespace e2e-tests-configmap-f47x5 deletion completed in 6.320061664s

• [SLOW TEST:20.059 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
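
This spec wires configMap data into a container's environment instead of a volume, then inspects the env-test container's output. A minimal configMap-plus-pod sketch with placeholder names and data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo             # placeholder
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo        # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
------------------------------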
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:41:27.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-8677eb2d-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 12:41:27.648: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-4z5cv" to be "success or failure"
Jan  4 12:41:27.673: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 25.394895ms
Jan  4 12:41:29.686: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038254642s
Jan  4 12:41:31.717: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069149939s
Jan  4 12:41:33.872: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22414667s
Jan  4 12:41:35.922: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.274414018s
Jan  4 12:41:37.937: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.288770427s
STEP: Saw pod success
Jan  4 12:41:37.937: INFO: Pod "pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:41:37.941: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 12:41:38.141: INFO: Waiting for pod pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:41:38.151: INFO: Pod pod-projected-secrets-8678a892-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:41:38.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4z5cv" for this suite.
Jan  4 12:41:45.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:41:45.361: INFO: namespace: e2e-tests-projected-4z5cv, resource: bindings, ignored listing per whitelist
Jan  4 12:41:45.428: INFO: namespace e2e-tests-projected-4z5cv deletion completed in 7.269757274s

• [SLOW TEST:17.959 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
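
"Mappings and Item Mode" refers to the items list of a projected secret source: each key can be remapped to a different path and given its own file mode, overriding any volume-level default. A sketch of that wiring, with placeholder names and illustrative key/path/mode values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo     # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-demo  # assumes this secret exists and holds a key named data-1
          items:
          - key: data-1
            path: new-path-data-1      # the mapping
            mode: 0400                 # the per-item mode
------------------------------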
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:41:45.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-9134b03a-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:41:45.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-95qb6" to be "success or failure"
Jan  4 12:41:45.690: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.209264ms
Jan  4 12:41:47.778: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099241574s
Jan  4 12:41:49.805: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126499573s
Jan  4 12:41:51.863: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183681108s
Jan  4 12:41:54.217: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538251122s
Jan  4 12:41:56.231: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.551974646s
STEP: Saw pod success
Jan  4 12:41:56.231: INFO: Pod "pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:41:56.240: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 12:41:56.558: INFO: Waiting for pod pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:41:56.577: INFO: Pod pod-projected-configmaps-91362d9b-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:41:56.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-95qb6" for this suite.
Jan  4 12:42:02.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:42:02.911: INFO: namespace: e2e-tests-projected-95qb6, resource: bindings, ignored listing per whitelist
Jan  4 12:42:02.914: INFO: namespace e2e-tests-projected-95qb6 deletion completed in 6.326026051s

• [SLOW TEST:17.485 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:42:02.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9bac9550-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:42:03.371: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-94hxm" to be "success or failure"
Jan  4 12:42:03.389: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.268563ms
Jan  4 12:42:05.847: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476104223s
Jan  4 12:42:07.883: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.511821768s
Jan  4 12:42:09.906: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535120141s
Jan  4 12:42:11.973: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601724181s
Jan  4 12:42:13.999: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.627732411s
STEP: Saw pod success
Jan  4 12:42:13.999: INFO: Pod "pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:42:14.004: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 12:42:14.841: INFO: Waiting for pod pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:42:14.864: INFO: Pod pod-projected-configmaps-9bae4889-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:42:14.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-94hxm" for this suite.
Jan  4 12:42:20.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:42:20.970: INFO: namespace: e2e-tests-projected-94hxm, resource: bindings, ignored listing per whitelist
Jan  4 12:42:21.095: INFO: namespace e2e-tests-projected-94hxm deletion completed in 6.217977843s

• [SLOW TEST:18.181 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:42:21.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-a6752e1f-2eef-11ea-9996-0242ac110006
STEP: Creating secret with name s-test-opt-upd-a6752ef6-2eef-11ea-9996-0242ac110006
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a6752e1f-2eef-11ea-9996-0242ac110006
STEP: Updating secret s-test-opt-upd-a6752ef6-2eef-11ea-9996-0242ac110006
STEP: Creating secret with name s-test-opt-create-a6752f25-2eef-11ea-9996-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:42:37.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-djx7r" for this suite.
Jan  4 12:43:01.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:43:01.760: INFO: namespace: e2e-tests-secrets-djx7r, resource: bindings, ignored listing per whitelist
Jan  4 12:43:01.904: INFO: namespace e2e-tests-secrets-djx7r deletion completed in 24.274725223s

• [SLOW TEST:40.809 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
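
The optional-updates spec mounts three secret references in one pod, only two of which exist when the pod is created, then deletes one, updates another, and creates the third, waiting for the kubelet to reflect each change in the mounted files. What makes that legal is optional: true on the secret volume source; without it the pod could not start while a referenced secret is missing. One such volume, sketched with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo               # placeholder
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls -l /etc/secret-volumes/create 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/secret-volumes/create
      readOnly: true
  volumes:
  - name: maybe-secret
    secret:
      secretName: optional-demo-secret     # placeholder; may not exist yet
      optional: true                       # pod starts anyway; files appear once the secret is created
------------------------------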
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:43:01.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 12:43:02.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-sz27q'
Jan  4 12:43:02.219: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 12:43:02.219: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  4 12:43:04.386: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-tzbn4]
Jan  4 12:43:04.386: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-tzbn4" in namespace "e2e-tests-kubectl-sz27q" to be "running and ready"
Jan  4 12:43:04.394: INFO: Pod "e2e-test-nginx-rc-tzbn4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.843774ms
Jan  4 12:43:06.419: INFO: Pod "e2e-test-nginx-rc-tzbn4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032942133s
Jan  4 12:43:08.435: INFO: Pod "e2e-test-nginx-rc-tzbn4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048673006s
Jan  4 12:43:10.455: INFO: Pod "e2e-test-nginx-rc-tzbn4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06843439s
Jan  4 12:43:12.481: INFO: Pod "e2e-test-nginx-rc-tzbn4": Phase="Running", Reason="", readiness=true. Elapsed: 8.095287967s
Jan  4 12:43:12.482: INFO: Pod "e2e-test-nginx-rc-tzbn4" satisfied condition "running and ready"
Jan  4 12:43:12.482: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-tzbn4]
Jan  4 12:43:12.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sz27q'
Jan  4 12:43:12.783: INFO: stderr: ""
Jan  4 12:43:12.783: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  4 12:43:12.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sz27q'
Jan  4 12:43:12.922: INFO: stderr: ""
Jan  4 12:43:12.922: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:43:12.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sz27q" for this suite.
Jan  4 12:43:37.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:43:37.074: INFO: namespace: e2e-tests-kubectl-sz27q, resource: bindings, ignored listing per whitelist
Jan  4 12:43:37.219: INFO: namespace e2e-tests-kubectl-sz27q deletion completed in 24.221271476s

• [SLOW TEST:35.315 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:43:37.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d3d96282-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:43:37.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-rcx64" to be "success or failure"
Jan  4 12:43:37.552: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.855674ms
Jan  4 12:43:39.609: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07310553s
Jan  4 12:43:41.629: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092602412s
Jan  4 12:43:43.944: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408269752s
Jan  4 12:43:45.965: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42890073s
Jan  4 12:43:48.046: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.509797503s
STEP: Saw pod success
Jan  4 12:43:48.046: INFO: Pod "pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:43:48.054: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jan  4 12:43:48.376: INFO: Waiting for pod pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:43:48.389: INFO: Pod pod-configmaps-d3e3be8e-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:43:48.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rcx64" for this suite.
Jan  4 12:43:56.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:43:56.717: INFO: namespace: e2e-tests-configmap-rcx64, resource: bindings, ignored listing per whitelist
Jan  4 12:43:56.768: INFO: namespace e2e-tests-configmap-rcx64 deletion completed in 8.369930404s

• [SLOW TEST:19.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
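
This is the same configMap-as-volume pattern seen earlier, except the pod runs as a non-root user and the test confirms the mounted keys are still readable. The non-root part is normally expressed through the securityContext; a sketch with placeholder names and an arbitrary non-root UID:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo         # placeholder
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # assumption: any non-root UID illustrates the point
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/config/*"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: configmap-nonroot-demo     # assumes this configMap exists
------------------------------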
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:43:56.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  4 12:43:56.985: INFO: Waiting up to 5m0s for pod "pod-df78d76d-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-2bbf8" to be "success or failure"
Jan  4 12:43:57.001: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.384807ms
Jan  4 12:43:59.130: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144426632s
Jan  4 12:44:01.178: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192824516s
Jan  4 12:44:03.197: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211198731s
Jan  4 12:44:05.211: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.225087882s
Jan  4 12:44:07.225: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 10.239650989s
Jan  4 12:44:09.241: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.255010087s
STEP: Saw pod success
Jan  4 12:44:09.241: INFO: Pod "pod-df78d76d-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:44:09.247: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-df78d76d-2eef-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 12:44:09.380: INFO: Waiting for pod pod-df78d76d-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:44:09.388: INFO: Pod pod-df78d76d-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:44:09.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2bbf8" for this suite.
Jan  4 12:44:15.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:44:15.615: INFO: namespace: e2e-tests-emptydir-2bbf8, resource: bindings, ignored listing per whitelist
Jan  4 12:44:15.728: INFO: namespace e2e-tests-emptydir-2bbf8 deletion completed in 6.287077206s

• [SLOW TEST:18.959 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
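
The (root,0644,default) label encodes the variant under test: run as root, write and verify a file with 0644 permissions, and use the default node-disk-backed emptyDir medium (sibling specs in the suite cover tmpfs and non-root combinations). A minimal emptyDir pod of that shape, with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                  # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/file && chmod 0644 /cache/file && ls -l /cache/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                       # default medium; medium: Memory would back it with tmpfs
------------------------------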
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:44:15.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-ead81771-2eef-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:44:16.106: INFO: Waiting up to 5m0s for pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-4pv82" to be "success or failure"
Jan  4 12:44:16.140: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 33.188931ms
Jan  4 12:44:18.152: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045667283s
Jan  4 12:44:20.170: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06319198s
Jan  4 12:44:22.285: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178109979s
Jan  4 12:44:24.302: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195040031s
Jan  4 12:44:27.236: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.129479108s
Jan  4 12:44:29.256: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.149867579s
STEP: Saw pod success
Jan  4 12:44:29.257: INFO: Pod "pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:44:29.272: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jan  4 12:44:29.471: INFO: Waiting for pod pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:44:29.491: INFO: Pod pod-configmaps-eadf8410-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:44:29.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4pv82" for this suite.
Jan  4 12:44:35.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:44:35.768: INFO: namespace: e2e-tests-configmap-4pv82, resource: bindings, ignored listing per whitelist
Jan  4 12:44:35.775: INFO: namespace e2e-tests-configmap-4pv82 deletion completed in 6.268566043s

• [SLOW TEST:20.047 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:44:35.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  4 12:44:36.078: INFO: Waiting up to 5m0s for pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-6bqf9" to be "success or failure"
Jan  4 12:44:36.304: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 226.308144ms
Jan  4 12:44:38.454: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376220093s
Jan  4 12:44:40.532: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454144839s
Jan  4 12:44:42.654: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575951086s
Jan  4 12:44:45.153: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.075460694s
Jan  4 12:44:47.774: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.695936788s
STEP: Saw pod success
Jan  4 12:44:47.774: INFO: Pod "downward-api-f6c52034-2eef-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:44:48.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f6c52034-2eef-11ea-9996-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan  4 12:44:48.371: INFO: Waiting for pod downward-api-f6c52034-2eef-11ea-9996-0242ac110006 to disappear
Jan  4 12:44:48.390: INFO: Pod downward-api-f6c52034-2eef-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:44:48.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6bqf9" for this suite.
Jan  4 12:44:54.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:44:54.676: INFO: namespace: e2e-tests-downward-api-6bqf9, resource: bindings, ignored listing per whitelist
Jan  4 12:44:54.770: INFO: namespace e2e-tests-downward-api-6bqf9 deletion completed in 6.373968007s

• [SLOW TEST:18.995 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
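The Downward API spec above exposes the container's own resource requests and limits as environment variables. A minimal sketch of an equivalent pod is shown below; the env var names, resource values, and busybox image are illustrative assumptions, not the exact object the framework creates.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory

The container's log output would then contain the four variables, which is the success condition the spec checks for.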
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:44:54.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  4 12:44:55.019: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 12:44:55.087: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 12:44:55.099: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  4 12:44:55.121: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  4 12:44:55.121: INFO: 	Container weave ready: true, restart count 0
Jan  4 12:44:55.121: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 12:44:55.121: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan  4 12:44:55.121: INFO: 	Container coredns ready: true, restart count 0
Jan  4 12:44:55.121: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:44:55.121: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:44:55.121: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 12:44:55.121: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan  4 12:44:55.121: INFO: 	Container coredns ready: true, restart count 0
Jan  4 12:44:55.121: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Jan  4 12:44:55.121: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 12:44:55.121: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e6afb139b276cb], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:44:56.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-hjxv4" for this suite.
Jan  4 12:45:02.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:45:02.595: INFO: namespace: e2e-tests-sched-pred-hjxv4, resource: bindings, ignored listing per whitelist
Jan  4 12:45:02.625: INFO: namespace e2e-tests-sched-pred-hjxv4 deletion completed in 6.441564938s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.854 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
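The scheduling spec above submits a pod whose NodeSelector matches no node, then waits for the FailedScheduling event shown in the log. A minimal sketch of such a pod is given below; the pod name, the label key/value, and the pause image are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo      # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    example.com/nonexistent-label: "true"   # no node carries this label

Because no node satisfies the selector, the pod stays Pending and kubectl describe pod would show a Warning/FailedScheduling event of the form "0/1 nodes are available: 1 node(s) didn't match node selector.", matching the event recorded above.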
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:45:02.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-clpl2/configmap-test-06ede7b7-2ef0-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 12:45:03.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-clpl2" to be "success or failure"
Jan  4 12:45:03.361: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.669816ms
Jan  4 12:45:05.374: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031989266s
Jan  4 12:45:07.389: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04730071s
Jan  4 12:45:09.508: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165637422s
Jan  4 12:45:11.541: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198704333s
Jan  4 12:45:13.555: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213402147s
Jan  4 12:45:15.575: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.233073942s
STEP: Saw pod success
Jan  4 12:45:15.575: INFO: Pod "pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 12:45:15.587: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006 container env-test: 
STEP: delete the pod
Jan  4 12:45:15.730: INFO: Waiting for pod pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006 to disappear
Jan  4 12:45:15.759: INFO: Pod pod-configmaps-06eee45d-2ef0-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:45:15.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-clpl2" for this suite.
Jan  4 12:45:21.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:45:21.989: INFO: namespace: e2e-tests-configmap-clpl2, resource: bindings, ignored listing per whitelist
Jan  4 12:45:22.092: INFO: namespace e2e-tests-configmap-clpl2 deletion completed in 6.321170003s

• [SLOW TEST:19.466 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
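The spec above injects a ConfigMap key into the container environment rather than mounting it as a volume. A minimal sketch of an equivalent pod follows; the ConfigMap name, key, env var name, and busybox image are illustrative assumptions, not values from the run above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo       # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-pod        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1

The pod's log output would then contain CONFIG_DATA_1=value-1, which is the condition the spec verifies before deleting the pod.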
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:45:22.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-46vkr
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-46vkr
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-46vkr
Jan  4 12:45:22.630: INFO: Found 0 stateful pods, waiting for 1
Jan  4 12:45:32.675: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 12:45:42.645: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
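The stateful set ss and its headless service test are generated by the framework in Go; a hand-written sketch of a burst-scaling StatefulSet of this shape is shown below. The labels, nginx image, and readiness probe are illustrative assumptions; only the names ss and test come from the log above.

apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None                # headless service for the StatefulSet
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: Parallel  # "burst" scaling: pods are created/deleted without waiting for ordinal order
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80

With podManagementPolicy: Parallel, scaling to 3 replicas creates ss-1 and ss-2 without waiting for ss-0 to become Ready, which is the behaviour the steps that follow appear to exercise by deliberately breaking the readiness probe (moving index.html out of the nginx web root) before scaling.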
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  4 12:45:42.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:45:43.477: INFO: stderr: "I0104 12:45:42.936157    1216 log.go:172] (0xc00070e2c0) (0xc000639400) Create stream\nI0104 12:45:42.936397    1216 log.go:172] (0xc00070e2c0) (0xc000639400) Stream added, broadcasting: 1\nI0104 12:45:42.945457    1216 log.go:172] (0xc00070e2c0) Reply frame received for 1\nI0104 12:45:42.945500    1216 log.go:172] (0xc00070e2c0) (0xc0006394a0) Create stream\nI0104 12:45:42.945509    1216 log.go:172] (0xc00070e2c0) (0xc0006394a0) Stream added, broadcasting: 3\nI0104 12:45:42.947192    1216 log.go:172] (0xc00070e2c0) Reply frame received for 3\nI0104 12:45:42.947291    1216 log.go:172] (0xc00070e2c0) (0xc00065a000) Create stream\nI0104 12:45:42.947307    1216 log.go:172] (0xc00070e2c0) (0xc00065a000) Stream added, broadcasting: 5\nI0104 12:45:42.949233    1216 log.go:172] (0xc00070e2c0) Reply frame received for 5\nI0104 12:45:43.254145    1216 log.go:172] (0xc00070e2c0) Data frame received for 3\nI0104 12:45:43.254194    1216 log.go:172] (0xc0006394a0) (3) Data frame handling\nI0104 12:45:43.254225    1216 log.go:172] (0xc0006394a0) (3) Data frame sent\nI0104 12:45:43.468872    1216 log.go:172] (0xc00070e2c0) Data frame received for 1\nI0104 12:45:43.468942    1216 log.go:172] (0xc00070e2c0) (0xc00065a000) Stream removed, broadcasting: 5\nI0104 12:45:43.468991    1216 log.go:172] (0xc000639400) (1) Data frame handling\nI0104 12:45:43.469019    1216 log.go:172] (0xc000639400) (1) Data frame sent\nI0104 12:45:43.469033    1216 log.go:172] (0xc00070e2c0) (0xc000639400) Stream removed, broadcasting: 1\nI0104 12:45:43.469182    1216 log.go:172] (0xc00070e2c0) (0xc0006394a0) Stream removed, broadcasting: 3\nI0104 12:45:43.469224    1216 log.go:172] (0xc00070e2c0) Go away received\nI0104 12:45:43.469307    1216 log.go:172] (0xc00070e2c0) (0xc000639400) Stream removed, broadcasting: 1\nI0104 12:45:43.469353    1216 log.go:172] (0xc00070e2c0) (0xc0006394a0) Stream removed, broadcasting: 3\nI0104 12:45:43.469364    1216 log.go:172] (0xc00070e2c0) (0xc00065a000) Stream removed, broadcasting: 5\n"
Jan  4 12:45:43.477: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:45:43.477: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:45:43.510: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  4 12:45:53.531: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:45:53.531: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:45:53.577: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:45:53.577: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:45:53.577: INFO: 
Jan  4 12:45:53.577: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  4 12:45:54.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984950366s
Jan  4 12:45:56.300: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.962691701s
Jan  4 12:45:57.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.262083795s
Jan  4 12:45:58.474: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.109114589s
Jan  4 12:45:59.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.087592675s
Jan  4 12:46:00.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.051978085s
Jan  4 12:46:03.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.003446848s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-46vkr
Jan  4 12:46:05.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:46:08.358: INFO: stderr: "I0104 12:46:07.834316    1237 log.go:172] (0xc00070e0b0) (0xc000736640) Create stream\nI0104 12:46:07.834786    1237 log.go:172] (0xc00070e0b0) (0xc000736640) Stream added, broadcasting: 1\nI0104 12:46:07.860690    1237 log.go:172] (0xc00070e0b0) Reply frame received for 1\nI0104 12:46:07.860812    1237 log.go:172] (0xc00070e0b0) (0xc0005fed20) Create stream\nI0104 12:46:07.860854    1237 log.go:172] (0xc00070e0b0) (0xc0005fed20) Stream added, broadcasting: 3\nI0104 12:46:07.864537    1237 log.go:172] (0xc00070e0b0) Reply frame received for 3\nI0104 12:46:07.864634    1237 log.go:172] (0xc00070e0b0) (0xc000402000) Create stream\nI0104 12:46:07.864682    1237 log.go:172] (0xc00070e0b0) (0xc000402000) Stream added, broadcasting: 5\nI0104 12:46:07.867750    1237 log.go:172] (0xc00070e0b0) Reply frame received for 5\nI0104 12:46:08.082772    1237 log.go:172] (0xc00070e0b0) Data frame received for 3\nI0104 12:46:08.082838    1237 log.go:172] (0xc0005fed20) (3) Data frame handling\nI0104 12:46:08.082855    1237 log.go:172] (0xc0005fed20) (3) Data frame sent\nI0104 12:46:08.350845    1237 log.go:172] (0xc00070e0b0) (0xc0005fed20) Stream removed, broadcasting: 3\nI0104 12:46:08.350966    1237 log.go:172] (0xc00070e0b0) Data frame received for 1\nI0104 12:46:08.351006    1237 log.go:172] (0xc00070e0b0) (0xc000402000) Stream removed, broadcasting: 5\nI0104 12:46:08.351084    1237 log.go:172] (0xc000736640) (1) Data frame handling\nI0104 12:46:08.351113    1237 log.go:172] (0xc000736640) (1) Data frame sent\nI0104 12:46:08.351122    1237 log.go:172] (0xc00070e0b0) (0xc000736640) Stream removed, broadcasting: 1\nI0104 12:46:08.351132    1237 log.go:172] (0xc00070e0b0) Go away received\nI0104 12:46:08.351374    1237 log.go:172] (0xc00070e0b0) (0xc000736640) Stream removed, broadcasting: 1\nI0104 12:46:08.351401    1237 log.go:172] (0xc00070e0b0) (0xc0005fed20) Stream removed, broadcasting: 3\nI0104 12:46:08.351412    1237 log.go:172] (0xc00070e0b0) (0xc000402000) Stream removed, broadcasting: 5\n"
Jan  4 12:46:08.358: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 12:46:08.358: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 12:46:08.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:46:09.064: INFO: rc: 1
Jan  4 12:46:09.064: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001fde450 exit status 1   true [0xc00118b700 0xc00118b718 0xc00118b730] [0xc00118b700 0xc00118b718 0xc00118b730] [0xc00118b710 0xc00118b728] [0x935700 0x935700] 0xc001d31320 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  4 12:46:19.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:46:19.579: INFO: stderr: "I0104 12:46:19.267462    1282 log.go:172] (0xc00015c6e0) (0xc000772640) Create stream\nI0104 12:46:19.267667    1282 log.go:172] (0xc00015c6e0) (0xc000772640) Stream added, broadcasting: 1\nI0104 12:46:19.274438    1282 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0104 12:46:19.274487    1282 log.go:172] (0xc00015c6e0) (0xc0006d8d20) Create stream\nI0104 12:46:19.274497    1282 log.go:172] (0xc00015c6e0) (0xc0006d8d20) Stream added, broadcasting: 3\nI0104 12:46:19.278038    1282 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0104 12:46:19.278068    1282 log.go:172] (0xc00015c6e0) (0xc000694000) Create stream\nI0104 12:46:19.278076    1282 log.go:172] (0xc00015c6e0) (0xc000694000) Stream added, broadcasting: 5\nI0104 12:46:19.279553    1282 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0104 12:46:19.418624    1282 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0104 12:46:19.418675    1282 log.go:172] (0xc000694000) (5) Data frame handling\nI0104 12:46:19.418696    1282 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0104 12:46:19.418725    1282 log.go:172] (0xc0006d8d20) (3) Data frame handling\nI0104 12:46:19.418767    1282 log.go:172] (0xc0006d8d20) (3) Data frame sent\nI0104 12:46:19.418821    1282 log.go:172] (0xc000694000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0104 12:46:19.570651    1282 log.go:172] (0xc00015c6e0) (0xc0006d8d20) Stream removed, broadcasting: 3\nI0104 12:46:19.570758    1282 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0104 12:46:19.570791    1282 log.go:172] (0xc000772640) (1) Data frame handling\nI0104 12:46:19.570822    1282 log.go:172] (0xc000772640) (1) Data frame sent\nI0104 12:46:19.570842    1282 log.go:172] (0xc00015c6e0) (0xc000694000) Stream removed, broadcasting: 5\nI0104 12:46:19.570943    1282 log.go:172] (0xc00015c6e0) (0xc000772640) Stream removed, broadcasting: 1\nI0104 12:46:19.570979    1282 log.go:172] (0xc00015c6e0) Go away received\nI0104 12:46:19.571366    1282 log.go:172] (0xc00015c6e0) (0xc000772640) Stream removed, broadcasting: 1\nI0104 12:46:19.571404    1282 log.go:172] (0xc00015c6e0) (0xc0006d8d20) Stream removed, broadcasting: 3\nI0104 12:46:19.571420    1282 log.go:172] (0xc00015c6e0) (0xc000694000) Stream removed, broadcasting: 5\n"
Jan  4 12:46:19.579: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 12:46:19.579: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 12:46:19.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:46:20.117: INFO: stderr: "I0104 12:46:19.779823    1305 log.go:172] (0xc000742370) (0xc00066d2c0) Create stream\nI0104 12:46:19.779969    1305 log.go:172] (0xc000742370) (0xc00066d2c0) Stream added, broadcasting: 1\nI0104 12:46:19.784452    1305 log.go:172] (0xc000742370) Reply frame received for 1\nI0104 12:46:19.784488    1305 log.go:172] (0xc000742370) (0xc0005de000) Create stream\nI0104 12:46:19.784495    1305 log.go:172] (0xc000742370) (0xc0005de000) Stream added, broadcasting: 3\nI0104 12:46:19.785476    1305 log.go:172] (0xc000742370) Reply frame received for 3\nI0104 12:46:19.785499    1305 log.go:172] (0xc000742370) (0xc000666000) Create stream\nI0104 12:46:19.785508    1305 log.go:172] (0xc000742370) (0xc000666000) Stream added, broadcasting: 5\nI0104 12:46:19.786716    1305 log.go:172] (0xc000742370) Reply frame received for 5\nI0104 12:46:19.917036    1305 log.go:172] (0xc000742370) Data frame received for 5\nI0104 12:46:19.917160    1305 log.go:172] (0xc000666000) (5) Data frame handling\nI0104 12:46:19.917178    1305 log.go:172] (0xc000666000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0104 12:46:19.917198    1305 log.go:172] (0xc000742370) Data frame received for 3\nI0104 12:46:19.917211    1305 log.go:172] (0xc0005de000) (3) Data frame handling\nI0104 12:46:19.917222    1305 log.go:172] (0xc0005de000) (3) Data frame sent\nI0104 12:46:20.111257    1305 log.go:172] (0xc000742370) Data frame received for 1\nI0104 12:46:20.111323    1305 log.go:172] (0xc000742370) (0xc000666000) Stream removed, broadcasting: 5\nI0104 12:46:20.111353    1305 log.go:172] (0xc00066d2c0) (1) Data frame handling\nI0104 12:46:20.111373    1305 log.go:172] (0xc00066d2c0) (1) Data frame sent\nI0104 12:46:20.111516    1305 log.go:172] (0xc000742370) (0xc0005de000) Stream removed, broadcasting: 3\nI0104 12:46:20.111621    1305 log.go:172] (0xc000742370) (0xc00066d2c0) Stream removed, broadcasting: 1\nI0104 12:46:20.111673    1305 log.go:172] (0xc000742370) Go away received\nI0104 12:46:20.111817    1305 log.go:172] (0xc000742370) (0xc00066d2c0) Stream removed, broadcasting: 1\nI0104 12:46:20.111835    1305 log.go:172] (0xc000742370) (0xc0005de000) Stream removed, broadcasting: 3\nI0104 12:46:20.111842    1305 log.go:172] (0xc000742370) (0xc000666000) Stream removed, broadcasting: 5\n"
Jan  4 12:46:20.117: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 12:46:20.117: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 12:46:20.141: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:46:20.141: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:46:20.141: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  4 12:46:20.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:46:20.702: INFO: stderr: "I0104 12:46:20.338762    1328 log.go:172] (0xc0008142c0) (0xc0005714a0) Create stream\nI0104 12:46:20.338888    1328 log.go:172] (0xc0008142c0) (0xc0005714a0) Stream added, broadcasting: 1\nI0104 12:46:20.344999    1328 log.go:172] (0xc0008142c0) Reply frame received for 1\nI0104 12:46:20.345034    1328 log.go:172] (0xc0008142c0) (0xc000344000) Create stream\nI0104 12:46:20.345044    1328 log.go:172] (0xc0008142c0) (0xc000344000) Stream added, broadcasting: 3\nI0104 12:46:20.345980    1328 log.go:172] (0xc0008142c0) Reply frame received for 3\nI0104 12:46:20.346001    1328 log.go:172] (0xc0008142c0) (0xc000571540) Create stream\nI0104 12:46:20.346007    1328 log.go:172] (0xc0008142c0) (0xc000571540) Stream added, broadcasting: 5\nI0104 12:46:20.347133    1328 log.go:172] (0xc0008142c0) Reply frame received for 5\nI0104 12:46:20.535017    1328 log.go:172] (0xc0008142c0) Data frame received for 3\nI0104 12:46:20.535147    1328 log.go:172] (0xc000344000) (3) Data frame handling\nI0104 12:46:20.535181    1328 log.go:172] (0xc000344000) (3) Data frame sent\nI0104 12:46:20.687469    1328 log.go:172] (0xc0008142c0) Data frame received for 1\nI0104 12:46:20.687950    1328 log.go:172] (0xc0005714a0) (1) Data frame handling\nI0104 12:46:20.688064    1328 log.go:172] (0xc0005714a0) (1) Data frame sent\nI0104 12:46:20.688132    1328 log.go:172] (0xc0008142c0) (0xc0005714a0) Stream removed, broadcasting: 1\nI0104 12:46:20.689774    1328 log.go:172] (0xc0008142c0) (0xc000344000) Stream removed, broadcasting: 3\nI0104 12:46:20.690521    1328 log.go:172] (0xc0008142c0) (0xc000571540) Stream removed, broadcasting: 5\nI0104 12:46:20.690663    1328 log.go:172] (0xc0008142c0) (0xc0005714a0) Stream removed, broadcasting: 1\nI0104 12:46:20.690705    1328 log.go:172] (0xc0008142c0) (0xc000344000) Stream removed, broadcasting: 3\nI0104 12:46:20.690732    1328 log.go:172] (0xc0008142c0) (0xc000571540) Stream removed, broadcasting: 5\n"
Jan  4 12:46:20.703: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:46:20.703: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:46:20.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:46:21.424: INFO: stderr: "I0104 12:46:20.974021    1349 log.go:172] (0xc00014c630) (0xc0006f8640) Create stream\nI0104 12:46:20.974229    1349 log.go:172] (0xc00014c630) (0xc0006f8640) Stream added, broadcasting: 1\nI0104 12:46:20.982978    1349 log.go:172] (0xc00014c630) Reply frame received for 1\nI0104 12:46:20.983071    1349 log.go:172] (0xc00014c630) (0xc0006f86e0) Create stream\nI0104 12:46:20.983083    1349 log.go:172] (0xc00014c630) (0xc0006f86e0) Stream added, broadcasting: 3\nI0104 12:46:20.985145    1349 log.go:172] (0xc00014c630) Reply frame received for 3\nI0104 12:46:20.985189    1349 log.go:172] (0xc00014c630) (0xc00064cc80) Create stream\nI0104 12:46:20.985202    1349 log.go:172] (0xc00014c630) (0xc00064cc80) Stream added, broadcasting: 5\nI0104 12:46:20.986308    1349 log.go:172] (0xc00014c630) Reply frame received for 5\nI0104 12:46:21.275456    1349 log.go:172] (0xc00014c630) Data frame received for 3\nI0104 12:46:21.275504    1349 log.go:172] (0xc0006f86e0) (3) Data frame handling\nI0104 12:46:21.275519    1349 log.go:172] (0xc0006f86e0) (3) Data frame sent\nI0104 12:46:21.418650    1349 log.go:172] (0xc00014c630) Data frame received for 1\nI0104 12:46:21.418746    1349 log.go:172] (0xc00014c630) (0xc0006f86e0) Stream removed, broadcasting: 3\nI0104 12:46:21.418783    1349 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0104 12:46:21.418793    1349 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0104 12:46:21.418850    1349 log.go:172] (0xc00014c630) (0xc0006f8640) Stream removed, broadcasting: 1\nI0104 12:46:21.418941    1349 log.go:172] (0xc00014c630) (0xc00064cc80) Stream removed, broadcasting: 5\nI0104 12:46:21.418977    1349 log.go:172] (0xc00014c630) Go away received\nI0104 12:46:21.419067    1349 log.go:172] (0xc00014c630) (0xc0006f8640) Stream removed, broadcasting: 1\nI0104 12:46:21.419079    1349 log.go:172] (0xc00014c630) (0xc0006f86e0) Stream removed, broadcasting: 3\nI0104 12:46:21.419084    1349 log.go:172] (0xc00014c630) (0xc00064cc80) Stream removed, broadcasting: 5\n"
Jan  4 12:46:21.425: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:46:21.425: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:46:21.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:46:21.957: INFO: stderr: "I0104 12:46:21.624910    1370 log.go:172] (0xc0003d04d0) (0xc0005cb540) Create stream\nI0104 12:46:21.625088    1370 log.go:172] (0xc0003d04d0) (0xc0005cb540) Stream added, broadcasting: 1\nI0104 12:46:21.628944    1370 log.go:172] (0xc0003d04d0) Reply frame received for 1\nI0104 12:46:21.628978    1370 log.go:172] (0xc0003d04d0) (0xc000706000) Create stream\nI0104 12:46:21.628987    1370 log.go:172] (0xc0003d04d0) (0xc000706000) Stream added, broadcasting: 3\nI0104 12:46:21.630019    1370 log.go:172] (0xc0003d04d0) Reply frame received for 3\nI0104 12:46:21.630076    1370 log.go:172] (0xc0003d04d0) (0xc000646000) Create stream\nI0104 12:46:21.630092    1370 log.go:172] (0xc0003d04d0) (0xc000646000) Stream added, broadcasting: 5\nI0104 12:46:21.631224    1370 log.go:172] (0xc0003d04d0) Reply frame received for 5\nI0104 12:46:21.834636    1370 log.go:172] (0xc0003d04d0) Data frame received for 3\nI0104 12:46:21.834677    1370 log.go:172] (0xc000706000) (3) Data frame handling\nI0104 12:46:21.834696    1370 log.go:172] (0xc000706000) (3) Data frame sent\nI0104 12:46:21.948812    1370 log.go:172] (0xc0003d04d0) (0xc000706000) Stream removed, broadcasting: 3\nI0104 12:46:21.949099    1370 log.go:172] (0xc0003d04d0) (0xc000646000) Stream removed, broadcasting: 5\nI0104 12:46:21.949594    1370 log.go:172] (0xc0003d04d0) Data frame received for 1\nI0104 12:46:21.949625    1370 log.go:172] (0xc0005cb540) (1) Data frame handling\nI0104 12:46:21.949648    1370 log.go:172] (0xc0005cb540) (1) Data frame sent\nI0104 12:46:21.949668    1370 log.go:172] (0xc0003d04d0) (0xc0005cb540) Stream removed, broadcasting: 1\nI0104 12:46:21.949691    1370 log.go:172] (0xc0003d04d0) Go away received\nI0104 12:46:21.950136    1370 log.go:172] (0xc0003d04d0) (0xc0005cb540) Stream removed, broadcasting: 1\nI0104 12:46:21.950152    1370 log.go:172] (0xc0003d04d0) (0xc000706000) Stream removed, broadcasting: 3\nI0104 12:46:21.950158    1370 log.go:172] (0xc0003d04d0) (0xc000646000) Stream removed, broadcasting: 5\n"
Jan  4 12:46:21.958: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:46:21.958: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:46:21.958: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:46:21.969: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  4 12:46:32.015: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:46:32.015: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:46:32.015: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:46:32.101: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:32.101: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:32.101: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:32.101: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:32.101: INFO: 
Jan  4 12:46:32.101: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:34.229: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:34.229: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:34.229: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:34.229: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:34.229: INFO: 
Jan  4 12:46:34.229: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:35.257: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:35.258: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:35.258: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:35.258: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:35.258: INFO: 
Jan  4 12:46:35.258: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:36.292: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:36.292: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:36.293: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:36.293: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:36.293: INFO: 
Jan  4 12:46:36.293: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:37.594: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:37.594: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:37.594: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:37.594: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:37.594: INFO: 
Jan  4 12:46:37.594: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:38.698: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:38.698: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:38.698: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:38.698: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:38.699: INFO: 
Jan  4 12:46:38.699: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:40.112: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:40.113: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:40.113: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:40.113: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:40.113: INFO: 
Jan  4 12:46:40.113: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 12:46:41.156: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  4 12:46:41.156: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:22 +0000 UTC  }]
Jan  4 12:46:41.156: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:41.156: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:46:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:45:53 +0000 UTC  }]
Jan  4 12:46:41.156: INFO: 
Jan  4 12:46:41.156: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-46vkr
Jan  4 12:46:42.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:46:42.502: INFO: rc: 1
Jan  4 12:46:42.502: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001957c50 exit status 1   true [0xc0014c60c8 0xc0014c60e0 0xc0014c60f8] [0xc0014c60c8 0xc0014c60e0 0xc0014c60f8] [0xc0014c60d8 0xc0014c60f0] [0x935700 0x935700] 0xc001a5c480 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  4 12:46:52.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:46:52.739: INFO: rc: 1
Jan  4 12:46:52.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001e9a1b0 exit status 1   true [0xc00118a000 0xc00118a030 0xc00118a080] [0xc00118a000 0xc00118a030 0xc00118a080] [0xc00118a028 0xc00118a068] [0x935700 0x935700] 0xc001c8e300 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  4 12:47:02.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:47:02.871: INFO: rc: 1
Jan  4 12:47:02.872: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00210f7d0 exit status 1   true [0xc0004146e0 0xc000414768 0xc0004148d8] [0xc0004146e0 0xc000414768 0xc0004148d8] [0xc000414748 0xc0004148b8] [0x935700 0x935700] 0xc0025cca20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:47:12.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:47:13.168: INFO: rc: 1
Jan  4 12:47:13.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a2d0 exit status 1   true [0xc00118a088 0xc00118a0c0 0xc00118a0e8] [0xc00118a088 0xc00118a0c0 0xc00118a0e8] [0xc00118a0a0 0xc00118a0e0] [0x935700 0x935700] 0xc001c8e960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:47:23.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:47:23.309: INFO: rc: 1
Jan  4 12:47:23.309: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a420 exit status 1   true [0xc00118a0f0 0xc00118a108 0xc00118a120] [0xc00118a0f0 0xc00118a108 0xc00118a120] [0xc00118a100 0xc00118a118] [0x935700 0x935700] 0xc001c8ef00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:47:33.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:47:33.482: INFO: rc: 1
Jan  4 12:47:33.483: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001957e00 exit status 1   true [0xc0014c6100 0xc0014c6118 0xc0014c6130] [0xc0014c6100 0xc0014c6118 0xc0014c6130] [0xc0014c6110 0xc0014c6128] [0x935700 0x935700] 0xc001a5c720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:47:43.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:47:43.674: INFO: rc: 1
Jan  4 12:47:43.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a540 exit status 1   true [0xc00118a128 0xc00118a148 0xc00118a160] [0xc00118a128 0xc00118a148 0xc00118a160] [0xc00118a140 0xc00118a158] [0x935700 0x935700] 0xc001c8f3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:47:53.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:47:53.874: INFO: rc: 1
Jan  4 12:47:53.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001957f80 exit status 1   true [0xc0014c6138 0xc0014c6150 0xc0014c6168] [0xc0014c6138 0xc0014c6150 0xc0014c6168] [0xc0014c6148 0xc0014c6160] [0x935700 0x935700] 0xc001a5c9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:48:03.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:48:04.120: INFO: rc: 1
Jan  4 12:48:04.120: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00196a0f0 exit status 1   true [0xc0014c6170 0xc0014c6188 0xc0014c61a0] [0xc0014c6170 0xc0014c6188 0xc0014c61a0] [0xc0014c6180 0xc0014c6198] [0x935700 0x935700] 0xc001a5cde0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:48:14.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:48:14.210: INFO: rc: 1
Jan  4 12:48:14.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00196a240 exit status 1   true [0xc0014c61a8 0xc0014c61c0 0xc0014c61d8] [0xc0014c61a8 0xc0014c61c0 0xc0014c61d8] [0xc0014c61b8 0xc0014c61d0] [0x935700 0x935700] 0xc001a5d0e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:48:24.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:48:24.299: INFO: rc: 1
Jan  4 12:48:24.299: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00210f920 exit status 1   true [0xc0004148f0 0xc000414a08 0xc000414ab8] [0xc0004148f0 0xc000414a08 0xc000414ab8] [0xc0004149c0 0xc000414a68] [0x935700 0x935700] 0xc0025ccd20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:48:34.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:48:34.504: INFO: rc: 1
Jan  4 12:48:34.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0018a2240 exit status 1   true [0xc0000e80f0 0xc00118a028 0xc00118a068] [0xc0000e80f0 0xc00118a028 0xc00118a068] [0xc00118a020 0xc00118a048] [0x935700 0x935700] 0xc001d15380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:48:44.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:48:44.677: INFO: rc: 1
Jan  4 12:48:44.677: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001956450 exit status 1   true [0xc0014c6000 0xc0014c6018 0xc0014c6030] [0xc0014c6000 0xc0014c6018 0xc0014c6030] [0xc0014c6010 0xc0014c6028] [0x935700 0x935700] 0xc001937500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:48:54.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:48:54.876: INFO: rc: 1
Jan  4 12:48:54.877: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a1e0 exit status 1   true [0xc000414178 0xc000414248 0xc0004143d0] [0xc000414178 0xc000414248 0xc0004143d0] [0xc000414230 0xc000414378] [0x935700 0x935700] 0xc001c8e300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:49:04.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:49:05.049: INFO: rc: 1
Jan  4 12:49:05.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a330 exit status 1   true [0xc000414400 0xc0004144b0 0xc000414588] [0xc000414400 0xc0004144b0 0xc000414588] [0xc000414448 0xc000414528] [0x935700 0x935700] 0xc001c8e960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:49:15.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:49:15.188: INFO: rc: 1
Jan  4 12:49:15.189: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a480 exit status 1   true [0xc0004145a0 0xc000414748 0xc0004148b8] [0xc0004145a0 0xc000414748 0xc0004148b8] [0xc000414728 0xc000414840] [0x935700 0x935700] 0xc001c8ef00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:49:25.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:49:25.287: INFO: rc: 1
Jan  4 12:49:25.287: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9ab40 exit status 1   true [0xc0004148d8 0xc0004149c0 0xc000414a68] [0xc0004148d8 0xc0004149c0 0xc000414a68] [0xc000414938 0xc000414a20] [0x935700 0x935700] 0xc001c8f3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:49:35.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:49:35.402: INFO: rc: 1
Jan  4 12:49:35.402: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0018a23f0 exit status 1   true [0xc00118a080 0xc00118a0a0 0xc00118a0e0] [0xc00118a080 0xc00118a0a0 0xc00118a0e0] [0xc00118a090 0xc00118a0d8] [0x935700 0x935700] 0xc001d159e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:49:45.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:49:45.621: INFO: rc: 1
Jan  4 12:49:45.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9ac60 exit status 1   true [0xc000414ab8 0xc000414bb0 0xc000414c98] [0xc000414ab8 0xc000414bb0 0xc000414c98] [0xc000414b80 0xc000414c70] [0x935700 0x935700] 0xc001c8f800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:49:55.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:49:55.765: INFO: rc: 1
Jan  4 12:49:55.766: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9ad80 exit status 1   true [0xc000414cb0 0xc000414d30 0xc000414dd0] [0xc000414cb0 0xc000414d30 0xc000414dd0] [0xc000414ce8 0xc000414d98] [0x935700 0x935700] 0xc001c8fc20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:50:05.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:50:05.997: INFO: rc: 1
Jan  4 12:50:05.998: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9aea0 exit status 1   true [0xc000414df8 0xc000414ec8 0xc000414ef8] [0xc000414df8 0xc000414ec8 0xc000414ef8] [0xc000414ec0 0xc000414ef0] [0x935700 0x935700] 0xc001c8fec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:50:15.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:50:16.093: INFO: rc: 1
Jan  4 12:50:16.094: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9afc0 exit status 1   true [0xc000414f18 0xc000414f38 0xc000414fb0] [0xc000414f18 0xc000414f38 0xc000414fb0] [0xc000414f30 0xc000414f80] [0x935700 0x935700] 0xc001a5c3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:50:26.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:50:26.206: INFO: rc: 1
Jan  4 12:50:26.206: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9b0e0 exit status 1   true [0xc000414fb8 0xc000414ff8 0xc000415060] [0xc000414fb8 0xc000414ff8 0xc000415060] [0xc000414fc8 0xc000415058] [0x935700 0x935700] 0xc001a5c660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:50:36.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:50:36.334: INFO: rc: 1
Jan  4 12:50:36.335: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019564e0 exit status 1   true [0xc000184000 0xc0014c6010 0xc0014c6028] [0xc000184000 0xc0014c6010 0xc0014c6028] [0xc0014c6008 0xc0014c6020] [0x935700 0x935700] 0xc001c8e300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:50:46.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:50:46.462: INFO: rc: 1
Jan  4 12:50:46.462: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a1b0 exit status 1   true [0xc00118a000 0xc00118a030 0xc00118a080] [0xc00118a000 0xc00118a030 0xc00118a080] [0xc00118a028 0xc00118a068] [0x935700 0x935700] 0xc001937500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:50:56.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:50:56.560: INFO: rc: 1
Jan  4 12:50:56.560: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00196a120 exit status 1   true [0xc000414178 0xc000414248 0xc0004143d0] [0xc000414178 0xc000414248 0xc0004143d0] [0xc000414230 0xc000414378] [0x935700 0x935700] 0xc001a5c420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:51:06.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:51:06.680: INFO: rc: 1
Jan  4 12:51:06.681: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001956660 exit status 1   true [0xc0014c6030 0xc0014c6048 0xc0014c6060] [0xc0014c6030 0xc0014c6048 0xc0014c6060] [0xc0014c6040 0xc0014c6058] [0x935700 0x935700] 0xc001c8e960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:51:16.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:51:16.785: INFO: rc: 1
Jan  4 12:51:16.786: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a360 exit status 1   true [0xc00118a088 0xc00118a0c0 0xc00118a0e8] [0xc00118a088 0xc00118a0c0 0xc00118a0e8] [0xc00118a0a0 0xc00118a0e0] [0x935700 0x935700] 0xc001d15260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:51:26.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:51:26.917: INFO: rc: 1
Jan  4 12:51:26.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9a4b0 exit status 1   true [0xc00118a0f0 0xc00118a108 0xc00118a120] [0xc00118a0f0 0xc00118a108 0xc00118a120] [0xc00118a100 0xc00118a118] [0x935700 0x935700] 0xc001d155c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:51:36.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:51:37.157: INFO: rc: 1
Jan  4 12:51:37.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001e9aab0 exit status 1   true [0xc00118a128 0xc00118a148 0xc00118a160] [0xc00118a128 0xc00118a148 0xc00118a160] [0xc00118a140 0xc00118a158] [0x935700 0x935700] 0xc001d15bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  4 12:51:47.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-46vkr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:51:47.304: INFO: rc: 1
Jan  4 12:51:47.305: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan  4 12:51:47.305: INFO: Scaling statefulset ss to 0
Jan  4 12:51:47.501: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  4 12:51:47.517: INFO: Deleting all statefulset in ns e2e-tests-statefulset-46vkr
Jan  4 12:51:47.525: INFO: Scaling statefulset ss to 0
Jan  4 12:51:47.552: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:51:47.560: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:51:47.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-46vkr" for this suite.
Jan  4 12:51:55.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:51:56.410: INFO: namespace: e2e-tests-statefulset-46vkr, resource: bindings, ignored listing per whitelist
Jan  4 12:51:56.519: INFO: namespace e2e-tests-statefulset-46vkr deletion completed in 8.819927798s

• [SLOW TEST:394.427 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
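The long block of identical failures above is the e2e framework's host-command retry: the same `kubectl exec` is re-issued every 10 s until it succeeds or the overall wait expires, and because pod ss-0 had already been deleted the apiserver answered NotFound on every attempt. A minimal sketch of that retry pattern, assuming kubectl is on PATH and using a hypothetical execInPod helper rather than the framework's actual RunHostCmd code:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// execInPod shells out to kubectl exec, mirroring the command seen in the log.
// The trailing "|| true" keeps a missing file from failing the command once the pod exists.
func execInPod(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-c", cmd+" || true").CombinedOutput()
	return string(out), err
}

func main() {
	const (
		ns      = "e2e-tests-statefulset-46vkr" // namespace from the log
		pod     = "ss-0"
		cmd     = "mv -v /tmp/index.html /usr/share/nginx/html/"
		retry   = 10 * time.Second // matches "Waiting 10s to retry failed RunHostCmd"
		timeout = 4 * time.Minute  // illustrative overall budget, not the framework's value
	)
	deadline := time.Now().Add(timeout)
	for {
		out, err := execInPod(ns, pod, cmd)
		if err == nil {
			fmt.Println("stdout:", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Printf("retrying in %s: %v\n", retry, err)
		time.Sleep(retry)
	}
}
```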
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:51:56.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vj8k9
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vj8k9
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-vj8k9
Jan  4 12:51:56.974: INFO: Found 0 stateful pods, waiting for 1
Jan  4 12:52:08.191: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 12:52:16.987: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  4 12:52:16.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:52:17.532: INFO: stderr: "I0104 12:52:17.147424    2012 log.go:172] (0xc0001386e0) (0xc0001212c0) Create stream\nI0104 12:52:17.147594    2012 log.go:172] (0xc0001386e0) (0xc0001212c0) Stream added, broadcasting: 1\nI0104 12:52:17.152459    2012 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0104 12:52:17.152501    2012 log.go:172] (0xc0001386e0) (0xc000121360) Create stream\nI0104 12:52:17.152521    2012 log.go:172] (0xc0001386e0) (0xc000121360) Stream added, broadcasting: 3\nI0104 12:52:17.154177    2012 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0104 12:52:17.154232    2012 log.go:172] (0xc0001386e0) (0xc00070e000) Create stream\nI0104 12:52:17.154252    2012 log.go:172] (0xc0001386e0) (0xc00070e000) Stream added, broadcasting: 5\nI0104 12:52:17.155930    2012 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0104 12:52:17.412433    2012 log.go:172] (0xc0001386e0) Data frame received for 3\nI0104 12:52:17.412483    2012 log.go:172] (0xc000121360) (3) Data frame handling\nI0104 12:52:17.412505    2012 log.go:172] (0xc000121360) (3) Data frame sent\nI0104 12:52:17.523347    2012 log.go:172] (0xc0001386e0) Data frame received for 1\nI0104 12:52:17.523450    2012 log.go:172] (0xc0001212c0) (1) Data frame handling\nI0104 12:52:17.523487    2012 log.go:172] (0xc0001212c0) (1) Data frame sent\nI0104 12:52:17.523514    2012 log.go:172] (0xc0001386e0) (0xc0001212c0) Stream removed, broadcasting: 1\nI0104 12:52:17.524751    2012 log.go:172] (0xc0001386e0) (0xc000121360) Stream removed, broadcasting: 3\nI0104 12:52:17.525584    2012 log.go:172] (0xc0001386e0) (0xc00070e000) Stream removed, broadcasting: 5\nI0104 12:52:17.525655    2012 log.go:172] (0xc0001386e0) (0xc0001212c0) Stream removed, broadcasting: 1\nI0104 12:52:17.525673    2012 log.go:172] (0xc0001386e0) (0xc000121360) Stream removed, broadcasting: 3\nI0104 12:52:17.525680    2012 log.go:172] (0xc0001386e0) (0xc00070e000) Stream removed, broadcasting: 5\n"
Jan  4 12:52:17.532: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:52:17.532: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
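Moving index.html out of the nginx web root is how this conformance test breaks readiness: the stateful pods appear to use an HTTP readiness probe against /index.html, so once the file is gone the probe fails and the kubelet flips ss-0 to Ready=false, as the next lines show; moving the file back later restores readiness. A rough sketch of what such a probe effectively checks, with the pod IP and probe path as assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// ready mimics an httpGet readiness probe: the pod counts as Ready only while
// GET /index.html answers 2xx. With the file moved to /tmp, nginx stops serving
// it and the probe (and therefore readiness) fails.
func ready(podIP string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://" + podIP + "/index.html") // path is an assumption
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 300
}

func main() {
	fmt.Println("ss-0 ready:", ready("10.0.0.1")) // placeholder pod IP
}
```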

Jan  4 12:52:17.546: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  4 12:52:27.562: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:52:27.562: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:52:27.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999648s
Jan  4 12:52:28.721: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.959378236s
Jan  4 12:52:29.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.888832023s
Jan  4 12:52:30.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.873127308s
Jan  4 12:52:31.783: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.846034408s
Jan  4 12:52:32.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.826687468s
Jan  4 12:52:33.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.801209569s
Jan  4 12:52:34.875: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.761011063s
Jan  4 12:52:35.929: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.734268111s
Jan  4 12:52:36.948: INFO: Verifying statefulset ss doesn't scale past 1 for another 680.293947ms
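The countdown above is the test polling the StatefulSet roughly once per second for about 10 s and asserting that it never scales past one replica while ss-0 is unready. A hedged sketch of that assertion using kubectl's jsonpath output instead of the framework's client-go calls (helper names and the window are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// currentReplicas reads .status.replicas of the StatefulSet via kubectl's jsonpath output.
func currentReplicas(ns, name string) (int, error) {
	out, err := exec.Command("kubectl", "get", "statefulset", name,
		"--namespace="+ns, "-o", "jsonpath={.status.replicas}").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	const ns, name, limit = "e2e-tests-statefulset-vj8k9", "ss", 1
	deadline := time.Now().Add(10 * time.Second) // the verification window seen in the log
	for time.Now().Before(deadline) {
		n, err := currentReplicas(ns, name)
		if err == nil && n > limit {
			fmt.Printf("FAIL: statefulset scaled past %d (saw %d)\n", limit, n)
			return
		}
		fmt.Printf("still at most %d replica(s), %s left\n", limit, time.Until(deadline).Round(time.Second))
		time.Sleep(time.Second)
	}
	fmt.Println("ok: statefulset never scaled past", limit)
}
```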
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-vj8k9
Jan  4 12:52:37.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:52:38.693: INFO: stderr: "I0104 12:52:38.318154    2034 log.go:172] (0xc000720370) (0xc000746640) Create stream\nI0104 12:52:38.318288    2034 log.go:172] (0xc000720370) (0xc000746640) Stream added, broadcasting: 1\nI0104 12:52:38.324151    2034 log.go:172] (0xc000720370) Reply frame received for 1\nI0104 12:52:38.324216    2034 log.go:172] (0xc000720370) (0xc00070ac80) Create stream\nI0104 12:52:38.324227    2034 log.go:172] (0xc000720370) (0xc00070ac80) Stream added, broadcasting: 3\nI0104 12:52:38.325322    2034 log.go:172] (0xc000720370) Reply frame received for 3\nI0104 12:52:38.325368    2034 log.go:172] (0xc000720370) (0xc000558000) Create stream\nI0104 12:52:38.325412    2034 log.go:172] (0xc000720370) (0xc000558000) Stream added, broadcasting: 5\nI0104 12:52:38.326574    2034 log.go:172] (0xc000720370) Reply frame received for 5\nI0104 12:52:38.484055    2034 log.go:172] (0xc000720370) Data frame received for 3\nI0104 12:52:38.484295    2034 log.go:172] (0xc00070ac80) (3) Data frame handling\nI0104 12:52:38.484345    2034 log.go:172] (0xc00070ac80) (3) Data frame sent\nI0104 12:52:38.682059    2034 log.go:172] (0xc000720370) Data frame received for 1\nI0104 12:52:38.682345    2034 log.go:172] (0xc000720370) (0xc000558000) Stream removed, broadcasting: 5\nI0104 12:52:38.682443    2034 log.go:172] (0xc000746640) (1) Data frame handling\nI0104 12:52:38.682464    2034 log.go:172] (0xc000746640) (1) Data frame sent\nI0104 12:52:38.682589    2034 log.go:172] (0xc000720370) (0xc00070ac80) Stream removed, broadcasting: 3\nI0104 12:52:38.682647    2034 log.go:172] (0xc000720370) (0xc000746640) Stream removed, broadcasting: 1\nI0104 12:52:38.682671    2034 log.go:172] (0xc000720370) Go away received\nI0104 12:52:38.683442    2034 log.go:172] (0xc000720370) (0xc000746640) Stream removed, broadcasting: 1\nI0104 12:52:38.683463    2034 log.go:172] (0xc000720370) (0xc00070ac80) Stream removed, broadcasting: 3\nI0104 12:52:38.683469    2034 log.go:172] (0xc000720370) (0xc000558000) Stream removed, broadcasting: 5\n"
Jan  4 12:52:38.693: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 12:52:38.693: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 12:52:38.707: INFO: Found 1 stateful pods, waiting for 3
Jan  4 12:52:48.760: INFO: Found 2 stateful pods, waiting for 3
Jan  4 12:52:58.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:52:58.733: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:52:58.733: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 12:53:08.723: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:53:08.723: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 12:53:08.723: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  4 12:53:08.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:53:09.361: INFO: stderr: "I0104 12:53:08.995642    2056 log.go:172] (0xc000490370) (0xc0007ba640) Create stream\nI0104 12:53:08.996067    2056 log.go:172] (0xc000490370) (0xc0007ba640) Stream added, broadcasting: 1\nI0104 12:53:09.013341    2056 log.go:172] (0xc000490370) Reply frame received for 1\nI0104 12:53:09.013416    2056 log.go:172] (0xc000490370) (0xc00065cd20) Create stream\nI0104 12:53:09.013433    2056 log.go:172] (0xc000490370) (0xc00065cd20) Stream added, broadcasting: 3\nI0104 12:53:09.015130    2056 log.go:172] (0xc000490370) Reply frame received for 3\nI0104 12:53:09.015152    2056 log.go:172] (0xc000490370) (0xc0007ba6e0) Create stream\nI0104 12:53:09.015159    2056 log.go:172] (0xc000490370) (0xc0007ba6e0) Stream added, broadcasting: 5\nI0104 12:53:09.017016    2056 log.go:172] (0xc000490370) Reply frame received for 5\nI0104 12:53:09.210635    2056 log.go:172] (0xc000490370) Data frame received for 3\nI0104 12:53:09.210706    2056 log.go:172] (0xc00065cd20) (3) Data frame handling\nI0104 12:53:09.210731    2056 log.go:172] (0xc00065cd20) (3) Data frame sent\nI0104 12:53:09.353352    2056 log.go:172] (0xc000490370) (0xc00065cd20) Stream removed, broadcasting: 3\nI0104 12:53:09.353456    2056 log.go:172] (0xc000490370) Data frame received for 1\nI0104 12:53:09.353472    2056 log.go:172] (0xc0007ba640) (1) Data frame handling\nI0104 12:53:09.353487    2056 log.go:172] (0xc0007ba640) (1) Data frame sent\nI0104 12:53:09.353527    2056 log.go:172] (0xc000490370) (0xc0007ba6e0) Stream removed, broadcasting: 5\nI0104 12:53:09.353558    2056 log.go:172] (0xc000490370) (0xc0007ba640) Stream removed, broadcasting: 1\nI0104 12:53:09.353566    2056 log.go:172] (0xc000490370) Go away received\nI0104 12:53:09.353808    2056 log.go:172] (0xc000490370) (0xc0007ba640) Stream removed, broadcasting: 1\nI0104 12:53:09.353821    2056 log.go:172] (0xc000490370) (0xc00065cd20) Stream removed, broadcasting: 3\nI0104 12:53:09.353828    2056 log.go:172] (0xc000490370) (0xc0007ba6e0) Stream removed, broadcasting: 5\n"
Jan  4 12:53:09.361: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:53:09.361: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:53:09.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:53:10.158: INFO: stderr: "I0104 12:53:09.572856    2078 log.go:172] (0xc0006020b0) (0xc0007065a0) Create stream\nI0104 12:53:09.572975    2078 log.go:172] (0xc0006020b0) (0xc0007065a0) Stream added, broadcasting: 1\nI0104 12:53:09.576449    2078 log.go:172] (0xc0006020b0) Reply frame received for 1\nI0104 12:53:09.576488    2078 log.go:172] (0xc0006020b0) (0xc0001a4be0) Create stream\nI0104 12:53:09.576501    2078 log.go:172] (0xc0006020b0) (0xc0001a4be0) Stream added, broadcasting: 3\nI0104 12:53:09.577682    2078 log.go:172] (0xc0006020b0) Reply frame received for 3\nI0104 12:53:09.577707    2078 log.go:172] (0xc0006020b0) (0xc000706640) Create stream\nI0104 12:53:09.577713    2078 log.go:172] (0xc0006020b0) (0xc000706640) Stream added, broadcasting: 5\nI0104 12:53:09.579737    2078 log.go:172] (0xc0006020b0) Reply frame received for 5\nI0104 12:53:09.803485    2078 log.go:172] (0xc0006020b0) Data frame received for 3\nI0104 12:53:09.804154    2078 log.go:172] (0xc0001a4be0) (3) Data frame handling\nI0104 12:53:09.804296    2078 log.go:172] (0xc0001a4be0) (3) Data frame sent\nI0104 12:53:10.143485    2078 log.go:172] (0xc0006020b0) Data frame received for 1\nI0104 12:53:10.143649    2078 log.go:172] (0xc0006020b0) (0xc0001a4be0) Stream removed, broadcasting: 3\nI0104 12:53:10.143745    2078 log.go:172] (0xc0007065a0) (1) Data frame handling\nI0104 12:53:10.143800    2078 log.go:172] (0xc0007065a0) (1) Data frame sent\nI0104 12:53:10.143817    2078 log.go:172] (0xc0006020b0) (0xc000706640) Stream removed, broadcasting: 5\nI0104 12:53:10.144051    2078 log.go:172] (0xc0006020b0) (0xc0007065a0) Stream removed, broadcasting: 1\nI0104 12:53:10.144096    2078 log.go:172] (0xc0006020b0) Go away received\nI0104 12:53:10.144873    2078 log.go:172] (0xc0006020b0) (0xc0007065a0) Stream removed, broadcasting: 1\nI0104 12:53:10.144896    2078 log.go:172] (0xc0006020b0) (0xc0001a4be0) Stream removed, broadcasting: 3\nI0104 12:53:10.144913    2078 log.go:172] (0xc0006020b0) (0xc000706640) Stream removed, broadcasting: 5\n"
Jan  4 12:53:10.159: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:53:10.159: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:53:10.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 12:53:11.026: INFO: stderr: "I0104 12:53:10.556510    2100 log.go:172] (0xc0001386e0) (0xc000726640) Create stream\nI0104 12:53:10.556623    2100 log.go:172] (0xc0001386e0) (0xc000726640) Stream added, broadcasting: 1\nI0104 12:53:10.561916    2100 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0104 12:53:10.561997    2100 log.go:172] (0xc0001386e0) (0xc00065cd20) Create stream\nI0104 12:53:10.562017    2100 log.go:172] (0xc0001386e0) (0xc00065cd20) Stream added, broadcasting: 3\nI0104 12:53:10.563449    2100 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0104 12:53:10.563526    2100 log.go:172] (0xc0001386e0) (0xc000658000) Create stream\nI0104 12:53:10.563545    2100 log.go:172] (0xc0001386e0) (0xc000658000) Stream added, broadcasting: 5\nI0104 12:53:10.581200    2100 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0104 12:53:10.846736    2100 log.go:172] (0xc0001386e0) Data frame received for 3\nI0104 12:53:10.846784    2100 log.go:172] (0xc00065cd20) (3) Data frame handling\nI0104 12:53:10.846851    2100 log.go:172] (0xc00065cd20) (3) Data frame sent\nI0104 12:53:11.021416    2100 log.go:172] (0xc0001386e0) (0xc00065cd20) Stream removed, broadcasting: 3\nI0104 12:53:11.021526    2100 log.go:172] (0xc0001386e0) Data frame received for 1\nI0104 12:53:11.021547    2100 log.go:172] (0xc000726640) (1) Data frame handling\nI0104 12:53:11.021559    2100 log.go:172] (0xc000726640) (1) Data frame sent\nI0104 12:53:11.021572    2100 log.go:172] (0xc0001386e0) (0xc000726640) Stream removed, broadcasting: 1\nI0104 12:53:11.021615    2100 log.go:172] (0xc0001386e0) (0xc000658000) Stream removed, broadcasting: 5\nI0104 12:53:11.021655    2100 log.go:172] (0xc0001386e0) Go away received\nI0104 12:53:11.021731    2100 log.go:172] (0xc0001386e0) (0xc000726640) Stream removed, broadcasting: 1\nI0104 12:53:11.021747    2100 log.go:172] (0xc0001386e0) (0xc00065cd20) Stream removed, broadcasting: 3\nI0104 12:53:11.021759    2100 log.go:172] (0xc0001386e0) (0xc000658000) Stream removed, broadcasting: 5\n"
Jan  4 12:53:11.027: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 12:53:11.027: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 12:53:11.027: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:53:11.042: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  4 12:53:21.176: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:53:21.177: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:53:21.177: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 12:53:21.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995654s
Jan  4 12:53:22.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.969231452s
Jan  4 12:53:23.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.916643375s
Jan  4 12:53:24.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.894437923s
Jan  4 12:53:25.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.868538715s
Jan  4 12:53:26.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.848486992s
Jan  4 12:53:27.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.781265565s
Jan  4 12:53:28.474: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.755786972s
Jan  4 12:53:29.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.732065836s
Jan  4 12:53:30.510: INFO: Verifying statefulset ss doesn't scale past 3 for another 721.117581ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-vj8k9
Jan  4 12:53:31.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:53:32.380: INFO: stderr: "I0104 12:53:31.837303    2121 log.go:172] (0xc0007202c0) (0xc000645400) Create stream\nI0104 12:53:31.837626    2121 log.go:172] (0xc0007202c0) (0xc000645400) Stream added, broadcasting: 1\nI0104 12:53:31.843631    2121 log.go:172] (0xc0007202c0) Reply frame received for 1\nI0104 12:53:31.843732    2121 log.go:172] (0xc0007202c0) (0xc00037e000) Create stream\nI0104 12:53:31.843765    2121 log.go:172] (0xc0007202c0) (0xc00037e000) Stream added, broadcasting: 3\nI0104 12:53:31.845325    2121 log.go:172] (0xc0007202c0) Reply frame received for 3\nI0104 12:53:31.845345    2121 log.go:172] (0xc0007202c0) (0xc0006454a0) Create stream\nI0104 12:53:31.845350    2121 log.go:172] (0xc0007202c0) (0xc0006454a0) Stream added, broadcasting: 5\nI0104 12:53:31.846764    2121 log.go:172] (0xc0007202c0) Reply frame received for 5\nI0104 12:53:31.991526    2121 log.go:172] (0xc0007202c0) Data frame received for 3\nI0104 12:53:31.991577    2121 log.go:172] (0xc00037e000) (3) Data frame handling\nI0104 12:53:31.991592    2121 log.go:172] (0xc00037e000) (3) Data frame sent\nI0104 12:53:32.366960    2121 log.go:172] (0xc0007202c0) (0xc00037e000) Stream removed, broadcasting: 3\nI0104 12:53:32.367162    2121 log.go:172] (0xc0007202c0) Data frame received for 1\nI0104 12:53:32.367334    2121 log.go:172] (0xc0007202c0) (0xc0006454a0) Stream removed, broadcasting: 5\nI0104 12:53:32.367672    2121 log.go:172] (0xc000645400) (1) Data frame handling\nI0104 12:53:32.367782    2121 log.go:172] (0xc000645400) (1) Data frame sent\nI0104 12:53:32.367800    2121 log.go:172] (0xc0007202c0) (0xc000645400) Stream removed, broadcasting: 1\nI0104 12:53:32.367871    2121 log.go:172] (0xc0007202c0) Go away received\nI0104 12:53:32.368210    2121 log.go:172] (0xc0007202c0) (0xc000645400) Stream removed, broadcasting: 1\nI0104 12:53:32.368235    2121 log.go:172] (0xc0007202c0) (0xc00037e000) Stream removed, broadcasting: 3\nI0104 12:53:32.368249    2121 log.go:172] (0xc0007202c0) (0xc0006454a0) Stream removed, broadcasting: 5\n"
Jan  4 12:53:32.380: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 12:53:32.380: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 12:53:32.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:53:33.023: INFO: stderr: "I0104 12:53:32.623522    2143 log.go:172] (0xc000726370) (0xc00076a640) Create stream\nI0104 12:53:32.623708    2143 log.go:172] (0xc000726370) (0xc00076a640) Stream added, broadcasting: 1\nI0104 12:53:32.632137    2143 log.go:172] (0xc000726370) Reply frame received for 1\nI0104 12:53:32.632174    2143 log.go:172] (0xc000726370) (0xc0005d8c80) Create stream\nI0104 12:53:32.632181    2143 log.go:172] (0xc000726370) (0xc0005d8c80) Stream added, broadcasting: 3\nI0104 12:53:32.633226    2143 log.go:172] (0xc000726370) Reply frame received for 3\nI0104 12:53:32.633243    2143 log.go:172] (0xc000726370) (0xc00076a6e0) Create stream\nI0104 12:53:32.633248    2143 log.go:172] (0xc000726370) (0xc00076a6e0) Stream added, broadcasting: 5\nI0104 12:53:32.634248    2143 log.go:172] (0xc000726370) Reply frame received for 5\nI0104 12:53:32.736215    2143 log.go:172] (0xc000726370) Data frame received for 3\nI0104 12:53:32.736244    2143 log.go:172] (0xc0005d8c80) (3) Data frame handling\nI0104 12:53:32.736260    2143 log.go:172] (0xc0005d8c80) (3) Data frame sent\nI0104 12:53:33.017862    2143 log.go:172] (0xc000726370) (0xc0005d8c80) Stream removed, broadcasting: 3\nI0104 12:53:33.017968    2143 log.go:172] (0xc000726370) Data frame received for 1\nI0104 12:53:33.017994    2143 log.go:172] (0xc00076a640) (1) Data frame handling\nI0104 12:53:33.018004    2143 log.go:172] (0xc00076a640) (1) Data frame sent\nI0104 12:53:33.018016    2143 log.go:172] (0xc000726370) (0xc00076a640) Stream removed, broadcasting: 1\nI0104 12:53:33.018029    2143 log.go:172] (0xc000726370) (0xc00076a6e0) Stream removed, broadcasting: 5\nI0104 12:53:33.018065    2143 log.go:172] (0xc000726370) Go away received\nI0104 12:53:33.018172    2143 log.go:172] (0xc000726370) (0xc00076a640) Stream removed, broadcasting: 1\nI0104 12:53:33.018193    2143 log.go:172] (0xc000726370) (0xc0005d8c80) Stream removed, broadcasting: 3\nI0104 12:53:33.018198    2143 log.go:172] (0xc000726370) (0xc00076a6e0) Stream removed, broadcasting: 5\n"
Jan  4 12:53:33.023: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 12:53:33.023: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 12:53:33.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:53:33.321: INFO: rc: 126
Jan  4 12:53:33.322: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 I0104 12:53:33.276753    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Create stream
I0104 12:53:33.277004    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Stream added, broadcasting: 1
I0104 12:53:33.284796    2165 log.go:172] (0xc0006ba0b0) Reply frame received for 1
I0104 12:53:33.284843    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Create stream
I0104 12:53:33.284859    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Stream added, broadcasting: 3
I0104 12:53:33.286054    2165 log.go:172] (0xc0006ba0b0) Reply frame received for 3
I0104 12:53:33.286082    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Create stream
I0104 12:53:33.286098    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Stream added, broadcasting: 5
I0104 12:53:33.287541    2165 log.go:172] (0xc0006ba0b0) Reply frame received for 5
I0104 12:53:33.313724    2165 log.go:172] (0xc0006ba0b0) Data frame received for 3
I0104 12:53:33.313745    2165 log.go:172] (0xc000714000) (3) Data frame handling
I0104 12:53:33.313777    2165 log.go:172] (0xc000714000) (3) Data frame sent
I0104 12:53:33.316740    2165 log.go:172] (0xc0006ba0b0) Data frame received for 1
I0104 12:53:33.316768    2165 log.go:172] (0xc0005e26e0) (1) Data frame handling
I0104 12:53:33.316786    2165 log.go:172] (0xc0005e26e0) (1) Data frame sent
I0104 12:53:33.316807    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Stream removed, broadcasting: 1
I0104 12:53:33.316948    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Stream removed, broadcasting: 3
I0104 12:53:33.317975    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Stream removed, broadcasting: 5
I0104 12:53:33.317990    2165 log.go:172] (0xc0006ba0b0) Go away received
I0104 12:53:33.318055    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Stream removed, broadcasting: 1
I0104 12:53:33.318066    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Stream removed, broadcasting: 3
I0104 12:53:33.318073    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc00210ed80 exit status 126   true [0xc00118a170 0xc00118a188 0xc00118a1a0] [0xc00118a170 0xc00118a188 0xc00118a1a0] [0xc00118a180 0xc00118a198] [0x935700 0x935700] 0xc0014cd500 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0104 12:53:33.276753    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Create stream
I0104 12:53:33.277004    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Stream added, broadcasting: 1
I0104 12:53:33.284796    2165 log.go:172] (0xc0006ba0b0) Reply frame received for 1
I0104 12:53:33.284843    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Create stream
I0104 12:53:33.284859    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Stream added, broadcasting: 3
I0104 12:53:33.286054    2165 log.go:172] (0xc0006ba0b0) Reply frame received for 3
I0104 12:53:33.286082    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Create stream
I0104 12:53:33.286098    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Stream added, broadcasting: 5
I0104 12:53:33.287541    2165 log.go:172] (0xc0006ba0b0) Reply frame received for 5
I0104 12:53:33.313724    2165 log.go:172] (0xc0006ba0b0) Data frame received for 3
I0104 12:53:33.313745    2165 log.go:172] (0xc000714000) (3) Data frame handling
I0104 12:53:33.313777    2165 log.go:172] (0xc000714000) (3) Data frame sent
I0104 12:53:33.316740    2165 log.go:172] (0xc0006ba0b0) Data frame received for 1
I0104 12:53:33.316768    2165 log.go:172] (0xc0005e26e0) (1) Data frame handling
I0104 12:53:33.316786    2165 log.go:172] (0xc0005e26e0) (1) Data frame sent
I0104 12:53:33.316807    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Stream removed, broadcasting: 1
I0104 12:53:33.316948    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Stream removed, broadcasting: 3
I0104 12:53:33.317975    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Stream removed, broadcasting: 5
I0104 12:53:33.317990    2165 log.go:172] (0xc0006ba0b0) Go away received
I0104 12:53:33.318055    2165 log.go:172] (0xc0006ba0b0) (0xc0005e26e0) Stream removed, broadcasting: 1
I0104 12:53:33.318066    2165 log.go:172] (0xc0006ba0b0) (0xc000714000) Stream removed, broadcasting: 3
I0104 12:53:33.318073    2165 log.go:172] (0xc0006ba0b0) (0xc0003f4d20) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
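The restore command against ss-2 fails differently from the earlier attempts: exit code 126 ("cannot exec in a stopped state") means the pod object still existed but its container was already terminating, since scale-down removes the highest ordinal first; the retries below then get exit code 1 with NotFound once the pod is gone. A small illustrative classifier for the two failure modes seen here (names are mine, not the framework's):

```go
package main

import (
	"fmt"
	"strings"
)

// classify distinguishes the two failures visible in this part of the log:
// exit 126 means the pod object still exists but its container can no longer
// run an exec; exit 1 with a NotFound error means the pod itself is gone.
func classify(rc int, stderr string) string {
	switch {
	case rc == 126:
		return "container stopped or terminating: retry until the pod disappears"
	case rc == 1 && strings.Contains(stderr, "NotFound"):
		return "pod deleted: retrying is harmless but will keep failing"
	default:
		return "unexpected failure: surface it"
	}
}

func main() {
	fmt.Println(classify(126, "cannot exec in a stopped state: unknown"))
	fmt.Println(classify(1, `Error from server (NotFound): pods "ss-2" not found`))
}
```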

Jan  4 12:53:43.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:53:43.483: INFO: rc: 1
Jan  4 12:53:43.483: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000c35530 exit status 1   true [0xc001c502b8 0xc001c502d0 0xc001c502e8] [0xc001c502b8 0xc001c502d0 0xc001c502e8] [0xc001c502c8 0xc001c502e0] [0x935700 0x935700] 0xc001a58000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:53:53.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:53:53.649: INFO: rc: 1
Jan  4 12:53:53.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001956e40 exit status 1   true [0xc001a5e0c0 0xc001a5e0d8 0xc001a5e0f0] [0xc001a5e0c0 0xc001a5e0d8 0xc001a5e0f0] [0xc001a5e0d0 0xc001a5e0e8] [0x935700 0x935700] 0xc001a89e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:54:03.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:54:03.807: INFO: rc: 1
Jan  4 12:54:03.808: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00210ef00 exit status 1   true [0xc00118a1a8 0xc00118a1c0 0xc00118a1d8] [0xc00118a1a8 0xc00118a1c0 0xc00118a1d8] [0xc00118a1b8 0xc00118a1d0] [0x935700 0x935700] 0xc0014cd860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:54:13.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:54:13.923: INFO: rc: 1
Jan  4 12:54:13.924: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000c35770 exit status 1   true [0xc001c502f0 0xc001c50308 0xc001c50320] [0xc001c502f0 0xc001c50308 0xc001c50320] [0xc001c50300 0xc001c50318] [0x935700 0x935700] 0xc001a58600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:54:23.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:54:24.015: INFO: rc: 1
Jan  4 12:54:24.015: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000c35890 exit status 1   true [0xc001c50328 0xc001c50340 0xc001c50358] [0xc001c50328 0xc001c50340 0xc001c50358] [0xc001c50338 0xc001c50350] [0x935700 0x935700] 0xc001a593e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:54:34.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:54:34.148: INFO: rc: 1
Jan  4 12:54:34.149: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0017675c0 exit status 1   true [0xc0014ac1e8 0xc0014ac200 0xc0014ac218] [0xc0014ac1e8 0xc0014ac200 0xc0014ac218] [0xc0014ac1f8 0xc0014ac210] [0x935700 0x935700] 0xc0014ab2c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:54:44.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:54:44.306: INFO: rc: 1
Jan  4 12:54:44.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a2240 exit status 1   true [0xc000184000 0xc001c50038 0xc001c50070] [0xc000184000 0xc001c50038 0xc001c50070] [0xc001c50020 0xc001c50060] [0x935700 0x935700] 0xc0014ce540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:54:54.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:54:54.459: INFO: rc: 1
Jan  4 12:54:54.459: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001e9a1b0 exit status 1   true [0xc0014ac000 0xc0014ac018 0xc0014ac030] [0xc0014ac000 0xc0014ac018 0xc0014ac030] [0xc0014ac010 0xc0014ac028] [0x935700 0x935700] 0xc0010665a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:55:04.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:55:04.608: INFO: rc: 1
Jan  4 12:55:04.609: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a9c150 exit status 1   true [0xc00118a000 0xc00118a030 0xc00118a080] [0xc00118a000 0xc00118a030 0xc00118a080] [0xc00118a028 0xc00118a068] [0x935700 0x935700] 0xc001a88c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:55:14.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:55:14.709: INFO: rc: 1
Jan  4 12:55:14.709: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc2120 exit status 1   true [0xc001a5e000 0xc001a5e018 0xc001a5e030] [0xc001a5e000 0xc001a5e018 0xc001a5e030] [0xc001a5e010 0xc001a5e028] [0x935700 0x935700] 0xc0016bf1a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:55:24.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:55:24.831: INFO: rc: 1
Jan  4 12:55:24.832: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a9c330 exit status 1   true [0xc00118a088 0xc00118a0c0 0xc00118a0e8] [0xc00118a088 0xc00118a0c0 0xc00118a0e8] [0xc00118a0a0 0xc00118a0e0] [0x935700 0x935700] 0xc001a89b60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:55:34.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:55:34.940: INFO: rc: 1
Jan  4 12:55:34.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a2420 exit status 1   true [0xc001c50098 0xc001c500d8 0xc001c50118] [0xc001c50098 0xc001c500d8 0xc001c50118] [0xc001c500c0 0xc001c50108] [0x935700 0x935700] 0xc00195c720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:55:44.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:55:45.047: INFO: rc: 1
Jan  4 12:55:45.048: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a2720 exit status 1   true [0xc001c50140 0xc001c501a8 0xc001c501c0] [0xc001c50140 0xc001c501a8 0xc001c501c0] [0xc001c50180 0xc001c501b8] [0x935700 0x935700] 0xc00195d560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:55:55.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:55:55.197: INFO: rc: 1
Jan  4 12:55:55.197: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc22d0 exit status 1   true [0xc001a5e038 0xc001a5e050 0xc001a5e068] [0xc001a5e038 0xc001a5e050 0xc001a5e068] [0xc001a5e048 0xc001a5e060] [0x935700 0x935700] 0xc0016bfc80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:56:05.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:56:05.335: INFO: rc: 1
Jan  4 12:56:05.335: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc2450 exit status 1   true [0xc001a5e070 0xc001a5e090 0xc001a5e0a8] [0xc001a5e070 0xc001a5e090 0xc001a5e0a8] [0xc001a5e080 0xc001a5e0a0] [0x935700 0x935700] 0xc00143aae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:56:15.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:56:15.415: INFO: rc: 1
Jan  4 12:56:15.415: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc25d0 exit status 1   true [0xc001a5e0b0 0xc001a5e0c8 0xc001a5e0e0] [0xc001a5e0b0 0xc001a5e0c8 0xc001a5e0e0] [0xc001a5e0c0 0xc001a5e0d8] [0x935700 0x935700] 0xc0017ca6c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:56:25.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:56:25.524: INFO: rc: 1
Jan  4 12:56:25.524: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a28d0 exit status 1   true [0xc001c501c8 0xc001c501e8 0xc001c50200] [0xc001c501c8 0xc001c501e8 0xc001c50200] [0xc001c501d8 0xc001c501f8] [0x935700 0x935700] 0xc00195df80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:56:35.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:56:35.709: INFO: rc: 1
Jan  4 12:56:35.709: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a2a20 exit status 1   true [0xc001c50208 0xc001c50220 0xc001c50238] [0xc001c50208 0xc001c50220 0xc001c50238] [0xc001c50218 0xc001c50230] [0x935700 0x935700] 0xc0012945a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:56:45.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:56:45.918: INFO: rc: 1
Jan  4 12:56:45.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a9c1b0 exit status 1   true [0xc0000e80f0 0xc00118a028 0xc00118a068] [0xc0000e80f0 0xc00118a028 0xc00118a068] [0xc00118a020 0xc00118a048] [0x935700 0x935700] 0xc001439c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:56:55.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:56:56.096: INFO: rc: 1
Jan  4 12:56:56.096: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc2180 exit status 1   true [0xc001a5e000 0xc001a5e018 0xc001a5e030] [0xc001a5e000 0xc001a5e018 0xc001a5e030] [0xc001a5e010 0xc001a5e028] [0x935700 0x935700] 0xc00143b9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:57:06.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:57:06.181: INFO: rc: 1
Jan  4 12:57:06.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a2270 exit status 1   true [0xc001c50010 0xc001c50050 0xc001c50098] [0xc001c50010 0xc001c50050 0xc001c50098] [0xc001c50038 0xc001c50070] [0x935700 0x935700] 0xc00195cf00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:57:16.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:57:16.352: INFO: rc: 1
Jan  4 12:57:16.353: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a9c3c0 exit status 1   true [0xc00118a080 0xc00118a0a0 0xc00118a0e0] [0xc00118a080 0xc00118a0a0 0xc00118a0e0] [0xc00118a090 0xc00118a0d8] [0x935700 0x935700] 0xc0016bee40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:57:26.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:57:26.483: INFO: rc: 1
Jan  4 12:57:26.483: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a9c540 exit status 1   true [0xc00118a0e8 0xc00118a100 0xc00118a118] [0xc00118a0e8 0xc00118a100 0xc00118a118] [0xc00118a0f8 0xc00118a110] [0x935700 0x935700] 0xc0016bf9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:57:36.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:57:36.577: INFO: rc: 1
Jan  4 12:57:36.577: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a9c690 exit status 1   true [0xc00118a120 0xc00118a140 0xc00118a158] [0xc00118a120 0xc00118a140 0xc00118a158] [0xc00118a138 0xc00118a150] [0x935700 0x935700] 0xc00244c720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:57:46.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:57:46.683: INFO: rc: 1
Jan  4 12:57:46.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018a25d0 exit status 1   true [0xc001c500a8 0xc001c500e8 0xc001c50140] [0xc001c500a8 0xc001c500e8 0xc001c50140] [0xc001c500d8 0xc001c50118] [0x935700 0x935700] 0xc00195d920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:57:56.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:57:56.801: INFO: rc: 1
Jan  4 12:57:56.801: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc2360 exit status 1   true [0xc001a5e038 0xc001a5e050 0xc001a5e068] [0xc001a5e038 0xc001a5e050 0xc001a5e068] [0xc001a5e048 0xc001a5e060] [0x935700 0x935700] 0xc0014ce420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:58:06.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:58:07.161: INFO: rc: 1
Jan  4 12:58:07.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc24e0 exit status 1   true [0xc001a5e070 0xc001a5e090 0xc001a5e0a8] [0xc001a5e070 0xc001a5e090 0xc001a5e0a8] [0xc001a5e080 0xc001a5e0a0] [0x935700 0x935700] 0xc001a88720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:58:17.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:58:18.950: INFO: rc: 1
Jan  4 12:58:18.951: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc26c0 exit status 1   true [0xc001a5e0b0 0xc001a5e0c8 0xc001a5e0e0] [0xc001a5e0b0 0xc001a5e0c8 0xc001a5e0e0] [0xc001a5e0c0 0xc001a5e0d8] [0x935700 0x935700] 0xc001a89a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:58:28.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:58:29.116: INFO: rc: 1
Jan  4 12:58:29.116: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000fc28d0 exit status 1   true [0xc001a5e0e8 0xc001a5e100 0xc001a5e118] [0xc001a5e0e8 0xc001a5e100 0xc001a5e118] [0xc001a5e0f8 0xc001a5e110] [0x935700 0x935700] 0xc0017ca6c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  4 12:58:39.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vj8k9 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 12:58:39.203: INFO: rc: 1
Jan  4 12:58:39.203: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  4 12:58:39.203: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  4 12:58:39.272: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vj8k9
Jan  4 12:58:39.281: INFO: Scaling statefulset ss to 0
Jan  4 12:58:39.341: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 12:58:39.347: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:58:39.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vj8k9" for this suite.
Jan  4 12:58:47.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:58:47.669: INFO: namespace: e2e-tests-statefulset-vj8k9, resource: bindings, ignored listing per whitelist
Jan  4 12:58:47.805: INFO: namespace e2e-tests-statefulset-vj8k9 deletion completed in 8.407341346s

• [SLOW TEST:411.285 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
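
The retry loop above is the framework's RunHostCmd helper re-issuing the same kubectl exec every 10 seconds; pod ss-2 had already been removed by the scale-down, so every attempt ends in the NotFound error shown, and once the retry window expires the test proceeds to scale the StatefulSet to 0 and verify the scale-down order. A rough hand-run equivalent, using the names from this run (the namespace is destroyed at the end of the test, so substitute a live StatefulSet to actually try it):

kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-vj8k9 \
  exec ss-2 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# Scale down and watch the pods leave in reverse ordinal order (ss-2, then ss-1, then ss-0):
kubectl -n e2e-tests-statefulset-vj8k9 scale statefulset ss --replicas=0
kubectl -n e2e-tests-statefulset-vj8k9 get pods -w
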
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:58:47.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-w6fl8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 12:58:48.510: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 12:59:28.912: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-w6fl8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 12:59:28.912: INFO: >>> kubeConfig: /root/.kube/config
I0104 12:59:28.986173       8 log.go:172] (0xc0003c8580) (0xc00117b860) Create stream
I0104 12:59:28.986246       8 log.go:172] (0xc0003c8580) (0xc00117b860) Stream added, broadcasting: 1
I0104 12:59:28.997893       8 log.go:172] (0xc0003c8580) Reply frame received for 1
I0104 12:59:28.997942       8 log.go:172] (0xc0003c8580) (0xc0013bc0a0) Create stream
I0104 12:59:28.997950       8 log.go:172] (0xc0003c8580) (0xc0013bc0a0) Stream added, broadcasting: 3
I0104 12:59:29.000025       8 log.go:172] (0xc0003c8580) Reply frame received for 3
I0104 12:59:29.000066       8 log.go:172] (0xc0003c8580) (0xc00117b900) Create stream
I0104 12:59:29.000085       8 log.go:172] (0xc0003c8580) (0xc00117b900) Stream added, broadcasting: 5
I0104 12:59:29.002112       8 log.go:172] (0xc0003c8580) Reply frame received for 5
I0104 12:59:29.309328       8 log.go:172] (0xc0003c8580) Data frame received for 3
I0104 12:59:29.309380       8 log.go:172] (0xc0013bc0a0) (3) Data frame handling
I0104 12:59:29.309394       8 log.go:172] (0xc0013bc0a0) (3) Data frame sent
I0104 12:59:29.471943       8 log.go:172] (0xc0003c8580) (0xc0013bc0a0) Stream removed, broadcasting: 3
I0104 12:59:29.472036       8 log.go:172] (0xc0003c8580) Data frame received for 1
I0104 12:59:29.472112       8 log.go:172] (0xc0003c8580) (0xc00117b900) Stream removed, broadcasting: 5
I0104 12:59:29.472218       8 log.go:172] (0xc00117b860) (1) Data frame handling
I0104 12:59:29.472243       8 log.go:172] (0xc00117b860) (1) Data frame sent
I0104 12:59:29.472253       8 log.go:172] (0xc0003c8580) (0xc00117b860) Stream removed, broadcasting: 1
I0104 12:59:29.472268       8 log.go:172] (0xc0003c8580) Go away received
I0104 12:59:29.472519       8 log.go:172] (0xc0003c8580) (0xc00117b860) Stream removed, broadcasting: 1
I0104 12:59:29.472537       8 log.go:172] (0xc0003c8580) (0xc0013bc0a0) Stream removed, broadcasting: 3
I0104 12:59:29.472553       8 log.go:172] (0xc0003c8580) (0xc00117b900) Stream removed, broadcasting: 5
Jan  4 12:59:29.472: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 12:59:29.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-w6fl8" for this suite.
Jan  4 12:59:53.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 12:59:53.648: INFO: namespace: e2e-tests-pod-network-test-w6fl8, resource: bindings, ignored listing per whitelist
Jan  4 12:59:53.683: INFO: namespace e2e-tests-pod-network-test-w6fl8 deletion completed in 24.188348032s

• [SLOW TEST:65.877 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
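
The ExecWithOptions block above is the actual connectivity check: the framework execs into the helper pod (host-test-container-pod, container hostexec) and curls the /dial endpoint of one test pod, asking it to reach the other test pod over UDP on port 8081. Run by hand it looks roughly like this (pod IPs 10.32.0.5 and 10.32.0.4 are specific to this run):

kubectl -n e2e-tests-pod-network-test-w6fl8 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"

The closing "Waiting for endpoints: map[]" most likely indicates that every expected endpoint answered, which is why the spec passes immediately afterwards.
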
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 12:59:53.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-19f887b2-2ef2-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 12:59:54.119: INFO: Waiting up to 5m0s for pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-8r9kv" to be "success or failure"
Jan  4 12:59:54.134: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871484ms
Jan  4 12:59:56.149: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030105384s
Jan  4 12:59:58.169: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050435326s
Jan  4 13:00:00.183: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064606338s
Jan  4 13:00:02.535: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416086188s
Jan  4 13:00:04.739: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.620266714s
STEP: Saw pod success
Jan  4 13:00:04.739: INFO: Pod "pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:00:04.754: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:00:05.068: INFO: Waiting for pod pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006 to disappear
Jan  4 13:00:05.155: INFO: Pod pod-secrets-19f967a5-2ef2-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:00:05.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8r9kv" for this suite.
Jan  4 13:00:11.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:00:11.282: INFO: namespace: e2e-tests-secrets-8r9kv, resource: bindings, ignored listing per whitelist
Jan  4 13:00:11.423: INFO: namespace e2e-tests-secrets-8r9kv deletion completed in 6.253736984s

• [SLOW TEST:17.740 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
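
The pod in this spec mounts the generated secret as a volume and verifies the file's content and mode. A hand-written sketch of the same shape (names and image are illustrative; the e2e pod uses the framework's own mount-test image and a generated secret name):

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # the defaultMode knob this spec exercises
EOF

kubectl logs secret-defaultmode-demo
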
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:00:11.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  4 13:00:11.739: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8gfhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-8gfhc/configmaps/e2e-watch-test-watch-closed,UID:246ea5c5-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17143262,Generation:0,CreationTimestamp:2020-01-04 13:00:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 13:00:11.740: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8gfhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-8gfhc/configmaps/e2e-watch-test-watch-closed,UID:246ea5c5-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17143263,Generation:0,CreationTimestamp:2020-01-04 13:00:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  4 13:00:11.778: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8gfhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-8gfhc/configmaps/e2e-watch-test-watch-closed,UID:246ea5c5-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17143265,Generation:0,CreationTimestamp:2020-01-04 13:00:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 13:00:11.778: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8gfhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-8gfhc/configmaps/e2e-watch-test-watch-closed,UID:246ea5c5-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17143266,Generation:0,CreationTimestamp:2020-01-04 13:00:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:00:11.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8gfhc" for this suite.
Jan  4 13:00:17.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:00:18.174: INFO: namespace: e2e-tests-watch-8gfhc, resource: bindings, ignored listing per whitelist
Jan  4 13:00:18.177: INFO: namespace e2e-tests-watch-8gfhc deletion completed in 6.390025016s

• [SLOW TEST:6.753 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
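
The resourceVersion values on the events above are what make the restart possible: a second watch opened at the last version seen by the first watch (17143263 here) replays every change made while no watch was running, which is exactly the MODIFIED (mutation: 2) and DELETED pair observed. The same thing can be done against the API directly, for example through kubectl proxy (the resource versions only make sense against this cluster's history):

kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-watch-8gfhc/configmaps?watch=true&resourceVersion=17143263&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted"
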
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:00:18.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:00:18.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-m7899" to be "success or failure"
Jan  4 13:00:18.643: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.352382ms
Jan  4 13:00:20.700: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071907617s
Jan  4 13:00:22.705: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077258093s
Jan  4 13:00:24.870: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.242567744s
Jan  4 13:00:27.022: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394379471s
Jan  4 13:00:29.072: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.444440733s
Jan  4 13:00:31.158: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.529594693s
STEP: Saw pod success
Jan  4 13:00:31.158: INFO: Pod "downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:00:31.168: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:00:31.244: INFO: Waiting for pod downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006 to disappear
Jan  4 13:00:31.346: INFO: Pod downwardapi-volume-288193e8-2ef2-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:00:31.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m7899" for this suite.
Jan  4 13:00:37.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:00:37.644: INFO: namespace: e2e-tests-projected-m7899, resource: bindings, ignored listing per whitelist
Jan  4 13:00:37.674: INFO: namespace e2e-tests-projected-m7899 deletion completed in 6.310213999s

• [SLOW TEST:19.497 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
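
"Downward API volume plugin" here means the container's own memory limit is projected into a file that the test container then reads back. A minimal stand-alone sketch of the same field reference (the conformance test wires it through a projected volume and its own test image; the plain downwardAPI volume type shown below takes an identical resourceFieldRef):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        cpu: 250m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi      # report the limit in MiB
EOF
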
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:00:37.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-d2tvl.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-d2tvl.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-d2tvl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-d2tvl.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-d2tvl.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-d2tvl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 13:00:54.266: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.269: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.273: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.280: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.284: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.286: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.293: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-d2tvl.svc.cluster.local from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.373: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.388: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.395: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006: the server could not find the requested resource (get pods dns-test-34125d64-2ef2-11ea-9996-0242ac110006)
Jan  4 13:00:54.395: INFO: Lookups using e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-d2tvl.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  4 13:00:59.765: INFO: DNS probes using e2e-tests-dns-d2tvl/dns-test-34125d64-2ef2-11ea-9996-0242ac110006 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:00:59.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-d2tvl" for this suite.
Jan  4 13:01:07.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:01:08.157: INFO: namespace: e2e-tests-dns-d2tvl, resource: bindings, ignored listing per whitelist
Jan  4 13:01:08.193: INFO: namespace e2e-tests-dns-d2tvl deletion completed in 8.249618412s

• [SLOW TEST:30.519 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
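
The wheezy/jessie command strings above are the whole test: two probe pods loop over dig lookups of kubernetes.default and its longer forms, over both UDP (+notcp) and TCP (+tcp), and write an OK marker file for every name that resolves; the runner then polls those files. The "Unable to read jessie_*" lines most likely just mean the jessie pod had not written its result files yet, and the probes are reported successful at 13:00:59. The core lookups can be run by hand from any pod that has dig installed:

dig +notcp +noall +answer +search kubernetes.default A
dig +tcp   +noall +answer +search kubernetes.default.svc.cluster.local A
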
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:01:08.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-v5qxf
Jan  4 13:01:18.843: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-v5qxf
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 13:01:18.916: INFO: Initial restart count of pod liveness-http is 0
Jan  4 13:01:41.380: INFO: Restart count of pod e2e-tests-container-probe-v5qxf/liveness-http is now 1 (22.464021829s elapsed)
Jan  4 13:02:01.650: INFO: Restart count of pod e2e-tests-container-probe-v5qxf/liveness-http is now 2 (42.734308605s elapsed)
Jan  4 13:02:23.937: INFO: Restart count of pod e2e-tests-container-probe-v5qxf/liveness-http is now 3 (1m5.02143681s elapsed)
Jan  4 13:02:42.841: INFO: Restart count of pod e2e-tests-container-probe-v5qxf/liveness-http is now 4 (1m23.92541721s elapsed)
Jan  4 13:03:47.532: INFO: Restart count of pod e2e-tests-container-probe-v5qxf/liveness-http is now 5 (2m28.616374053s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:03:47.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-v5qxf" for this suite.
Jan  4 13:03:55.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:03:55.913: INFO: namespace: e2e-tests-container-probe-v5qxf, resource: bindings, ignored listing per whitelist
Jan  4 13:03:55.985: INFO: namespace e2e-tests-container-probe-v5qxf deletion completed in 8.237619404s

• [SLOW TEST:167.791 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
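
The restart counts being asserted here come straight from the pod's containerStatuses: liveness-http serves an HTTP health endpoint that deliberately starts failing, the kubelet kills and restarts the container when the liveness probe fails, and the test only requires the count to increase monotonically. The same field can be read by hand:

kubectl -n e2e-tests-container-probe-v5qxf get pod liveness-http \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'

# or simply watch the RESTARTS column climb:
kubectl -n e2e-tests-container-probe-v5qxf get pod liveness-http -w
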
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:03:55.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 13:03:56.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-bw6bz'
Jan  4 13:03:58.231: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 13:03:58.231: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  4 13:04:02.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-bw6bz'
Jan  4 13:04:02.918: INFO: stderr: ""
Jan  4 13:04:02.918: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:04:02.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bw6bz" for this suite.
Jan  4 13:04:09.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:04:09.276: INFO: namespace: e2e-tests-kubectl-bw6bz, resource: bindings, ignored listing per whitelist
Jan  4 13:04:09.366: INFO: namespace e2e-tests-kubectl-bw6bz deletion completed in 6.399564248s

• [SLOW TEST:13.380 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
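
The captured stderr is worth noting: kubectl run --generator=deployment/v1beta1 was already deprecated on the client used here, and on current clients kubectl run only creates pods. The non-deprecated equivalent of what this spec does is:

kubectl -n e2e-tests-kubectl-bw6bz create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
kubectl -n e2e-tests-kubectl-bw6bz delete deployment e2e-test-nginx-deployment
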
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:04:09.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:04:09.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:04:19.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-22vsc" for this suite.
Jan  4 13:05:21.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:05:22.169: INFO: namespace: e2e-tests-pods-22vsc, resource: bindings, ignored listing per whitelist
Jan  4 13:05:22.221: INFO: namespace e2e-tests-pods-22vsc deletion completed in 1m2.44043959s

• [SLOW TEST:72.855 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
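
There is little visible output for this spec because the interesting part is the transport: the pod's log subresource is fetched over a websocket connection rather than a plain HTTP GET. Day to day the same endpoint is what kubectl logs talks to, and it can also be reached through the API server directly (placeholders below, since the pod name is not shown in this log):

kubectl -n <namespace> logs <pod-name>

kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/pods/<pod-name>/log"
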
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:05:22.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:05:22.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-4z5cj" to be "success or failure"
Jan  4 13:05:22.710: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025984ms
Jan  4 13:05:24.728: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026871319s
Jan  4 13:05:26.742: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040812637s
Jan  4 13:05:28.815: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113927935s
Jan  4 13:05:30.988: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.286061162s
Jan  4 13:05:33.019: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.31763675s
Jan  4 13:05:35.052: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.350637689s
STEP: Saw pod success
Jan  4 13:05:35.052: INFO: Pod "downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:05:35.080: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:05:35.295: INFO: Waiting for pod downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006 to disappear
Jan  4 13:05:35.398: INFO: Pod downwardapi-volume-ddc6433b-2ef2-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:05:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4z5cj" for this suite.
Jan  4 13:05:43.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:05:43.853: INFO: namespace: e2e-tests-projected-4z5cj, resource: bindings, ignored listing per whitelist
Jan  4 13:05:43.908: INFO: namespace e2e-tests-projected-4z5cj deletion completed in 8.495732696s

• [SLOW TEST:21.687 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
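
"Node allocatable (cpu) as default cpu limit" means that when the container sets no cpu limit, the downward API's limits.cpu reference falls back to the node's allocatable cpu. The value it falls back to is visible on the node object itself (node name taken from this run):

kubectl get node hunter-server-hu5at5svl7ps -o jsonpath='{.status.allocatable.cpu}'
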
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:05:43.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:05:44.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-cpcrc" to be "success or failure"
Jan  4 13:05:44.416: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 192.858336ms
Jan  4 13:05:46.477: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253900239s
Jan  4 13:05:48.533: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31032435s
Jan  4 13:05:50.594: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.37044567s
Jan  4 13:05:52.786: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562491944s
Jan  4 13:05:54.802: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.579041517s
STEP: Saw pod success
Jan  4 13:05:54.802: INFO: Pod "downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:05:54.816: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:05:55.718: INFO: Waiting for pod downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006 to disappear
Jan  4 13:05:55.747: INFO: Pod downwardapi-volume-eaa253be-2ef2-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:05:55.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cpcrc" for this suite.
Jan  4 13:06:01.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:06:02.056: INFO: namespace: e2e-tests-projected-cpcrc, resource: bindings, ignored listing per whitelist
Jan  4 13:06:02.182: INFO: namespace e2e-tests-projected-cpcrc deletion completed in 6.421046337s

• [SLOW TEST:18.273 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
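For reference, the step "Creating a pod to test downward API volume plugin" above boils down to a pod whose projected downwardAPI volume writes the pod's own name into a file that the client container prints. A minimal sketch of such a pod, assuming hypothetical names (downwardapi-volume-podname-demo, /etc/podinfo/podname) rather than the framework's generated ones:

// Minimal sketch (assumed names, not the e2e fixture itself): a projected
// downwardAPI volume that exposes metadata.name, read back by the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// metadata.name resolves to the pod's own name.
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	// Print the manifest; the e2e framework would create it and compare logs.
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}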
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:06:02.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:06:02.834: INFO: Creating deployment "nginx-deployment"
Jan  4 13:06:02.867: INFO: Waiting for observed generation 1
Jan  4 13:06:05.200: INFO: Waiting for all required pods to come up
Jan  4 13:06:07.497: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  4 13:07:14.934: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  4 13:07:15.020: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  4 13:07:15.069: INFO: Updating deployment nginx-deployment
Jan  4 13:07:15.069: INFO: Waiting for observed generation 2
Jan  4 13:07:22.271: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  4 13:07:22.469: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  4 13:07:22.784: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  4 13:07:24.740: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  4 13:07:24.740: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  4 13:07:25.145: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  4 13:07:26.722: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  4 13:07:26.722: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  4 13:07:27.891: INFO: Updating deployment nginx-deployment
Jan  4 13:07:27.891: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  4 13:07:29.368: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  4 13:07:33.755: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
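At the point logged above, the deployment owns 13 pods across two replicasets (old = 8, new = 5) and is being scaled from 10 to 30 with maxSurge = 3, so 33 - 13 = 20 extra replicas have to be handed out. The controller distributes them roughly in proportion to each replicaset's current size, which is how the verifications land on .spec.replicas = 20 and 13. A back-of-the-envelope check (not the deployment controller's actual code path) that reproduces that split:

// Back-of-the-envelope check of the proportional split logged above:
// old RS at 8 replicas, new RS at 5, deployment scaled 10 -> 30, maxSurge 3.
package main

import "fmt"

func main() {
	oldRS, newRS := 8, 5       // .spec.replicas at the moment of scaling
	current := oldRS + newRS   // 13 pods currently owned by the two replicasets
	allowed := 30 + 3          // new .spec.replicas plus maxSurge
	toAdd := allowed - current // 20 extra replicas to distribute

	// Split the extras in proportion to each replicaset's current size,
	// flooring first and handing the remainder to the larger fractional share.
	oldAdd := toAdd * oldRS / current // 12 (12.31 floored)
	newAdd := toAdd * newRS / current // 7  (7.69 floored)
	for oldAdd+newAdd < toAdd {       // one leftover replica in this case
		newAdd++ // the 5/13 share has the larger remainder here
	}

	fmt.Printf("old RS -> %d, new RS -> %d\n", oldRS+oldAdd, newRS+newAdd) // old RS -> 20, new RS -> 13
}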
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  4 13:07:38.899: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdqpz/deployments/nginx-deployment,UID:f5c33fac-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144205,Generation:3,CreationTimestamp:2020-01-04 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-04 13:07:24 +0000 UTC 2020-01-04 13:06:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-04 13:07:29 +0000 UTC 2020-01-04 13:07:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  4 13:07:39.411: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdqpz/replicasets/nginx-deployment-5c98f8fb5,UID:20d1bc1d-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144213,Generation:3,CreationTimestamp:2020-01-04 13:07:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f5c33fac-2ef2-11ea-a994-fa163e34d433 0xc001c4bb07 0xc001c4bb08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 13:07:39.411: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  4 13:07:39.412: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdqpz/replicasets/nginx-deployment-85ddf47c5d,UID:f5df2e97-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144203,Generation:3,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f5c33fac-2ef2-11ea-a994-fa163e34d433 0xc001c4bbe7 0xc001c4bbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  4 13:07:40.398: INFO: Pod "nginx-deployment-5c98f8fb5-2pfrj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2pfrj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-2pfrj,UID:20e3a051-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144116,Generation:0,CreationTimestamp:2020-01-04 13:07:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc6c47 0xc000cc6c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc6d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc6d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.398: INFO: Pod "nginx-deployment-5c98f8fb5-496x9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-496x9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-496x9,UID:2566b036-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144141,Generation:0,CreationTimestamp:2020-01-04 13:07:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc6ea7 0xc000cc6ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7030} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc7050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.399: INFO: Pod "nginx-deployment-5c98f8fb5-4f2x6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4f2x6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-4f2x6,UID:2ae471e1-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144198,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc71f7 0xc000cc71f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc7390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.399: INFO: Pod "nginx-deployment-5c98f8fb5-6fpnz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6fpnz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-6fpnz,UID:295bc2f3-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144165,Generation:0,CreationTimestamp:2020-01-04 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc7477 0xc000cc7478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc75a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc75c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.399: INFO: Pod "nginx-deployment-5c98f8fb5-84cth" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-84cth,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-84cth,UID:2a968cb0-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144179,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc76b7 0xc000cc76b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc7740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.400: INFO: Pod "nginx-deployment-5c98f8fb5-89gjc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-89gjc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-89gjc,UID:2ae3f930-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144199,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc7827 0xc000cc7828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc78b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.400: INFO: Pod "nginx-deployment-5c98f8fb5-8zgsf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8zgsf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-8zgsf,UID:2ae54ddf-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144197,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc7997 0xc000cc7998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7a00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc7a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.400: INFO: Pod "nginx-deployment-5c98f8fb5-g5qv9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g5qv9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-g5qv9,UID:2b58d112-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144208,Generation:0,CreationTimestamp:2020-01-04 13:07:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc7ab7 0xc000cc7ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7b20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc7b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.401: INFO: Pod "nginx-deployment-5c98f8fb5-hczmp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hczmp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-hczmp,UID:2530ac83-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144138,Generation:0,CreationTimestamp:2020-01-04 13:07:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc000cc7c67 0xc000cc7c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cc7ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cc7d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.401: INFO: Pod "nginx-deployment-5c98f8fb5-nksjj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nksjj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-nksjj,UID:20e34d9b-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144110,Generation:0,CreationTimestamp:2020-01-04 13:07:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc001b38027 0xc001b38028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.402: INFO: Pod "nginx-deployment-5c98f8fb5-r2ghl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r2ghl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-r2ghl,UID:2ae570f0-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144196,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc001b38207 0xc001b38208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.403: INFO: Pod "nginx-deployment-5c98f8fb5-s5qp9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s5qp9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-s5qp9,UID:20dc8810-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144102,Generation:0,CreationTimestamp:2020-01-04 13:07:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc001b38317 0xc001b38318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.403: INFO: Pod "nginx-deployment-5c98f8fb5-vdrj7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vdrj7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-5c98f8fb5-vdrj7,UID:2a96a399-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144181,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 20d1bc1d-2ef3-11ea-a994-fa163e34d433 0xc001b384e7 0xc001b384e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.403: INFO: Pod "nginx-deployment-85ddf47c5d-299r7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-299r7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-299r7,UID:2a9eb7ef-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144194,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b38697 0xc001b38698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.403: INFO: Pod "nginx-deployment-85ddf47c5d-45h2h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-45h2h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-45h2h,UID:f604ef24-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144062,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b38797 0xc001b38798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b388b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b388d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-04 13:06:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:06:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://23b7141880401d9800126c996df710fb7e39b604b847135e38c26953ebcc7d96}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.404: INFO: Pod "nginx-deployment-85ddf47c5d-9fl8l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9fl8l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-9fl8l,UID:296111fe-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144170,Generation:0,CreationTimestamp:2020-01-04 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b38997 0xc001b38998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.404: INFO: Pod "nginx-deployment-85ddf47c5d-bjd5x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bjd5x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-bjd5x,UID:2a9e73a7-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144191,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b38b67 0xc001b38b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.404: INFO: Pod "nginx-deployment-85ddf47c5d-bm2m7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bm2m7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-bm2m7,UID:28c5b5bf-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144220,Generation:0,CreationTimestamp:2020-01-04 13:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b38cd7 0xc001b38cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.405: INFO: Pod "nginx-deployment-85ddf47c5d-g7vkf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g7vkf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-g7vkf,UID:f63f0fce-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144036,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b38e87 0xc001b38e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b38ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b38f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-04 13:06:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:07:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://617c755810c4d5a9db320eff2886071083c1be2d59c199a56580c7e5453583fa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.405: INFO: Pod "nginx-deployment-85ddf47c5d-hn6bj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hn6bj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-hn6bj,UID:2a9e8d6f-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144195,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b390e7 0xc001b390e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b391c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b391e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.405: INFO: Pod "nginx-deployment-85ddf47c5d-lp8jg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lp8jg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-lp8jg,UID:29610c6d-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144169,Generation:0,CreationTimestamp:2020-01-04 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b39257 0xc001b39258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b392d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b392f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.406: INFO: Pod "nginx-deployment-85ddf47c5d-lvmdj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lvmdj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-lvmdj,UID:f60c73a6-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144058,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b39427 0xc001b39428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b394c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b395b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-04 13:06:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:06:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6e943c0fc630705b5ba1f9d5adbb1f42b07fdc75b2b8e16d56536e688be1278f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.406: INFO: Pod "nginx-deployment-85ddf47c5d-m8v7l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m8v7l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-m8v7l,UID:f609baf7-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144050,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b39807 0xc001b39808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b39880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b398a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-04 13:06:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:07:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5492ec4ed3158dfe12a3345e12b89a79a1401ef55c74656251ae9e164591776b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.406: INFO: Pod "nginx-deployment-85ddf47c5d-mh6vz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mh6vz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-mh6vz,UID:28c56615-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144204,Generation:0,CreationTimestamp:2020-01-04 13:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b399c7 0xc001b399c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b39a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b39a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.406: INFO: Pod "nginx-deployment-85ddf47c5d-mwgdn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mwgdn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-mwgdn,UID:f60972ed-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144049,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc001b39b17 0xc001b39b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b39c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b39c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-04 13:06:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:07:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9999d500bd1ed3196b88e5e76318eb4e7a447f8374bf79d36a36c7000856e95c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.407: INFO: Pod "nginx-deployment-85ddf47c5d-nvqbj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nvqbj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-nvqbj,UID:28c2fe65-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144180,Generation:0,CreationTimestamp:2020-01-04 13:07:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb00a7 0xc000fb00a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb02b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb02d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-04 13:07:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.407: INFO: Pod "nginx-deployment-85ddf47c5d-qjc6j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qjc6j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-qjc6j,UID:f60ced9c-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144051,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb0677 0xc000fb0678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb06e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb0700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-04 13:06:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:07:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9727070bec8860fc0830819dc3505d34573ff07ec911379ba74633e80697780e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.407: INFO: Pod "nginx-deployment-85ddf47c5d-sflnj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sflnj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-sflnj,UID:2a9e78a2-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144183,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb07c7 0xc000fb07c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb0970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb0b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.408: INFO: Pod "nginx-deployment-85ddf47c5d-svr4p" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-svr4p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-svr4p,UID:f60c9dd7-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144011,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb0bd7 0xc000fb0bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb0d80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb0da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-04 13:06:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:06:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cee66417ca5eba9df598cf9d6aaa209921ea5b2290d2f79189006df11428cd80}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.408: INFO: Pod "nginx-deployment-85ddf47c5d-sx777" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sx777,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-sx777,UID:2960c9f3-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144171,Generation:0,CreationTimestamp:2020-01-04 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb1137 0xc000fb1138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb11a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb11c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.408: INFO: Pod "nginx-deployment-85ddf47c5d-tjfnn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tjfnn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-tjfnn,UID:296137ad-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144168,Generation:0,CreationTimestamp:2020-01-04 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb1377 0xc000fb1378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb13f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb1410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.409: INFO: Pod "nginx-deployment-85ddf47c5d-tm9cf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tm9cf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-tm9cf,UID:2a9e7ad3-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144188,Generation:0,CreationTimestamp:2020-01-04 13:07:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb1487 0xc000fb1488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb14f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb1510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  4 13:07:40.409: INFO: Pod "nginx-deployment-85ddf47c5d-w6lmv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w6lmv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rdqpz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdqpz/pods/nginx-deployment-85ddf47c5d-w6lmv,UID:f63f2183-2ef2-11ea-a994-fa163e34d433,ResourceVersion:17144071,Generation:0,CreationTimestamp:2020-01-04 13:06:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f5df2e97-2ef2-11ea-a994-fa163e34d433 0xc000fb1777 0xc000fb1778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nbt4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nbt4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-nbt4z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000fb17e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000fb1800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:06:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-04 13:06:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:07:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://29adf218b86d4c19b2271f6d2e6c020de2a90e2661ebc3a1810c656e55f8e6d2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:07:40.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rdqpz" for this suite.
Jan  4 13:08:56.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:08:56.739: INFO: namespace: e2e-tests-deployment-rdqpz, resource: bindings, ignored listing per whitelist
Jan  4 13:08:57.671: INFO: namespace e2e-tests-deployment-rdqpz deletion completed in 1m16.69886004s

• [SLOW TEST:175.489 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
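For context, the "is available" / "is not available" lines earlier in this test come from inspecting the Deployment's pods mid-rollout. Below is a minimal client-go sketch of that kind of check; it is not the e2e framework's own code, it assumes a reasonably recent client-go, and it uses a simplified notion of availability (Ready condition True, ignoring minReadySeconds). The kubeconfig path, namespace, and label selector are copied from this run purely for illustration.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True -- a simplified
// stand-in for the framework's availability check, which also honours
// minReadySeconds.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Path, namespace and label values are taken from the log above for
	// illustration; adjust them for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("e2e-tests-deployment-rdqpz").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=nginx"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if isReady(p) {
			fmt.Printf("Pod %q is available\n", p.Name)
		} else {
			fmt.Printf("Pod %q is not available\n", p.Name)
		}
	}
}
```

Run against a live rollout, such a check prints a mix of available and not-available pods, much like the listing above, because proportional scaling keeps both the old and new ReplicaSets partially scaled while their pods start.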
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:08:57.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  4 13:08:59.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144469,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 13:08:59.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144469,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  4 13:09:09.421: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144482,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  4 13:09:09.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144482,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  4 13:09:19.482: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144492,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 13:09:19.482: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144492,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  4 13:09:29.512: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144505,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 13:09:29.513: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-a,UID:5e62b081-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144505,Generation:0,CreationTimestamp:2020-01-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  4 13:09:39.530: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-b,UID:76e971f2-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144518,Generation:0,CreationTimestamp:2020-01-04 13:09:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 13:09:39.530: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-b,UID:76e971f2-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144518,Generation:0,CreationTimestamp:2020-01-04 13:09:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  4 13:09:49.561: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-b,UID:76e971f2-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144531,Generation:0,CreationTimestamp:2020-01-04 13:09:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 13:09:49.561: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v4cld,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4cld/configmaps/e2e-watch-test-configmap-b,UID:76e971f2-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144531,Generation:0,CreationTimestamp:2020-01-04 13:09:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:09:59.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-v4cld" for this suite.
Jan  4 13:10:06.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:10:06.431: INFO: namespace: e2e-tests-watch-v4cld, resource: bindings, ignored listing per whitelist
Jan  4 13:10:06.650: INFO: namespace e2e-tests-watch-v4cld deletion completed in 6.396995864s

• [SLOW TEST:68.978 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
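Editor's note: the watch behaviour exercised in the test above can be reproduced outside the e2e framework with client-go. The following is a minimal sketch, assuming a reasonably recent client-go (v0.18 or later, where the typed clients take a context), the kubeconfig path this run uses, and an arbitrary namespace; the label selector mirrors the test's "watcher A".

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch only ConfigMaps carrying the label the test uses for "watcher A".
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Every create, update, and delete of a matching ConfigMap arrives as one
	// event, which is what the "Got : ADDED/MODIFIED/DELETED" lines above show.
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		fmt.Printf("Got : %s %s mutation=%q\n", ev.Type, cm.Name, cm.Data["mutation"])
	}
}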
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:10:06.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-874083db-2ef3-11ea-9996-0242ac110006
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:10:23.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-th9vg" for this suite.
Jan  4 13:10:47.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:10:47.251: INFO: namespace: e2e-tests-configmap-th9vg, resource: bindings, ignored listing per whitelist
Jan  4 13:10:47.360: INFO: namespace e2e-tests-configmap-th9vg deletion completed in 24.302689985s

• [SLOW TEST:40.710 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
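Editor's note: the binary-data test above boils down to a ConfigMap whose BinaryData field is projected into a volume alongside ordinary Data keys. A buildable sketch with the corev1 and client-go types follows (clientset construction as in the earlier watch sketch; the names and byte payload are illustrative, not the ones the framework generated).

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBinaryConfigMap creates a ConfigMap with both a text key and a binary
// key and returns a volume that a pod can mount to see them as files.
func createBinaryConfigMap(cs kubernetes.Interface, ns string) (corev1.Volume, error) {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfb, 0xad, 0x00}},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		return corev1.Volume{}, err
	}
	// A container mounting this volume at /etc/cm sees /etc/cm/data-1 as text
	// and /etc/cm/dump.bin byte-for-byte, which is what the test asserts.
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
			},
		},
	}, nil
}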
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:10:47.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-q5tl6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 13:10:47.579: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 13:11:40.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-q5tl6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 13:11:40.233: INFO: >>> kubeConfig: /root/.kube/config
I0104 13:11:40.358711       8 log.go:172] (0xc0003c8580) (0xc002554dc0) Create stream
I0104 13:11:40.358866       8 log.go:172] (0xc0003c8580) (0xc002554dc0) Stream added, broadcasting: 1
I0104 13:11:40.371590       8 log.go:172] (0xc0003c8580) Reply frame received for 1
I0104 13:11:40.371871       8 log.go:172] (0xc0003c8580) (0xc0024b8000) Create stream
I0104 13:11:40.371934       8 log.go:172] (0xc0003c8580) (0xc0024b8000) Stream added, broadcasting: 3
I0104 13:11:40.378654       8 log.go:172] (0xc0003c8580) Reply frame received for 3
I0104 13:11:40.378704       8 log.go:172] (0xc0003c8580) (0xc0024b80a0) Create stream
I0104 13:11:40.378717       8 log.go:172] (0xc0003c8580) (0xc0024b80a0) Stream added, broadcasting: 5
I0104 13:11:40.381020       8 log.go:172] (0xc0003c8580) Reply frame received for 5
I0104 13:11:40.620923       8 log.go:172] (0xc0003c8580) Data frame received for 3
I0104 13:11:40.620965       8 log.go:172] (0xc0024b8000) (3) Data frame handling
I0104 13:11:40.620982       8 log.go:172] (0xc0024b8000) (3) Data frame sent
I0104 13:11:40.982996       8 log.go:172] (0xc0003c8580) Data frame received for 1
I0104 13:11:40.983115       8 log.go:172] (0xc0003c8580) (0xc0024b80a0) Stream removed, broadcasting: 5
I0104 13:11:40.983157       8 log.go:172] (0xc002554dc0) (1) Data frame handling
I0104 13:11:40.983187       8 log.go:172] (0xc002554dc0) (1) Data frame sent
I0104 13:11:40.983236       8 log.go:172] (0xc0003c8580) (0xc0024b8000) Stream removed, broadcasting: 3
I0104 13:11:40.983264       8 log.go:172] (0xc0003c8580) (0xc002554dc0) Stream removed, broadcasting: 1
I0104 13:11:40.983285       8 log.go:172] (0xc0003c8580) Go away received
I0104 13:11:40.983645       8 log.go:172] (0xc0003c8580) (0xc002554dc0) Stream removed, broadcasting: 1
I0104 13:11:40.983655       8 log.go:172] (0xc0003c8580) (0xc0024b8000) Stream removed, broadcasting: 3
I0104 13:11:40.983664       8 log.go:172] (0xc0003c8580) (0xc0024b80a0) Stream removed, broadcasting: 5
Jan  4 13:11:40.983: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:11:40.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-q5tl6" for this suite.
Jan  4 13:12:07.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:12:07.468: INFO: namespace: e2e-tests-pod-network-test-q5tl6, resource: bindings, ignored listing per whitelist
Jan  4 13:12:07.498: INFO: namespace e2e-tests-pod-network-test-q5tl6 deletion completed in 26.481014723s

• [SLOW TEST:80.138 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
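Editor's note: the intra-pod check above works by exec-ing curl inside a helper pod against the test container's /dial endpoint, which in turn dials the target pod over HTTP and reports what it saw. Stripped of the exec plumbing, the probe itself is a single HTTP GET; a minimal sketch follows, with the two pod IPs as placeholders for the ones the framework discovers at run time.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	probeIP := "10.32.0.5"  // pod serving the /dial helper endpoint
	targetIP := "10.32.0.4" // pod whose hostname we expect back

	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
		probeIP, targetIP)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// On success the helper answers with a JSON list of responses, roughly
	// {"responses":["<target pod hostname>"]}. The log's final
	// "Waiting for endpoints: map[]" means every expected endpoint replied.
	fmt.Println(string(body))
}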
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:12:07.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:12:07.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-8ztkq" to be "success or failure"
Jan  4 13:12:07.812: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 29.443196ms
Jan  4 13:12:09.941: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158329684s
Jan  4 13:12:12.098: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315183271s
Jan  4 13:12:14.121: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339045319s
Jan  4 13:12:16.164: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38155593s
Jan  4 13:12:18.375: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.592411362s
Jan  4 13:12:20.422: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.639691243s
Jan  4 13:12:22.434: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.651316081s
STEP: Saw pod success
Jan  4 13:12:22.434: INFO: Pod "downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:12:22.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:12:23.104: INFO: Waiting for pod downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006 to disappear
Jan  4 13:12:23.288: INFO: Pod downwardapi-volume-cf430ebf-2ef3-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:12:23.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8ztkq" for this suite.
Jan  4 13:12:29.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:12:29.402: INFO: namespace: e2e-tests-downward-api-8ztkq, resource: bindings, ignored listing per whitelist
Jan  4 13:12:29.478: INFO: namespace e2e-tests-downward-api-8ztkq deletion completed in 6.183145735s

• [SLOW TEST:21.980 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
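Editor's note: what the DefaultMode test above creates is a pod with a downwardAPI volume whose files inherit a non-default mode. A sketch of just the pod spec follows, using the corev1 types; the image and the probe command are illustrative (the real test uses the framework's mounttest image and reads the mode from inside the container).

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIDefaultModePod returns a pod whose downwardAPI volume files are
// created with mode 0400 instead of the 0644 default.
func downwardAPIDefaultModePod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode, // applies to every item without an explicit Mode
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}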
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:12:29.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  4 13:12:29.665: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-l2vgq" to be "success or failure"
Jan  4 13:12:29.686: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.642156ms
Jan  4 13:12:32.339: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.674189526s
Jan  4 13:12:34.360: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694675765s
Jan  4 13:12:37.689: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024036688s
Jan  4 13:12:39.715: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049379291s
Jan  4 13:12:41.748: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.083299966s
Jan  4 13:12:43.760: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.094921165s
Jan  4 13:12:45.785: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.119603026s
STEP: Saw pod success
Jan  4 13:12:45.785: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  4 13:12:45.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  4 13:12:45.855: INFO: Waiting for pod pod-host-path-test to disappear
Jan  4 13:12:45.871: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:12:45.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-l2vgq" for this suite.
Jan  4 13:12:51.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:12:52.022: INFO: namespace: e2e-tests-hostpath-l2vgq, resource: bindings, ignored listing per whitelist
Jan  4 13:12:52.091: INFO: namespace e2e-tests-hostpath-l2vgq deletion completed in 6.209617687s

• [SLOW TEST:22.613 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
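Editor's note: the hostPath test follows the same pattern, a pod mounting a hostPath volume whose container reports the mount's mode. A sketch of the spec follows; the host path and probe command are illustrative, the conformance test uses its own mounttest image.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathModePod mounts a directory from the node and prints its mode,
// mirroring what "pod-host-path-test" checks above.
func hostPathModePod() *corev1.Pod {
	hostPathType := corev1.HostPathDirectoryOrCreate
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp",
						Type: &hostPathType,
					},
				},
			}},
		},
	}
}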
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:12:52.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  4 13:12:52.339: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:12:52.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dk5jc" for this suite.
Jan  4 13:12:58.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:12:58.788: INFO: namespace: e2e-tests-kubectl-dk5jc, resource: bindings, ignored listing per whitelist
Jan  4 13:12:58.800: INFO: namespace e2e-tests-kubectl-dk5jc deletion completed in 6.241670277s

• [SLOW TEST:6.708 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
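Editor's note: the proxy test above starts "kubectl proxy -p 0 --disable-filter", reads back the port the proxy actually chose, and curls /api/ through it. A rough Go equivalent using os/exec follows; the port parsing is deliberately simplistic and assumes kubectl's usual "Starting to serve on 127.0.0.1:<port>" banner.

package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	// -p 0 lets the proxy pick a free port; --disable-filter matches the e2e invocation.
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// kubectl prints a line such as "Starting to serve on 127.0.0.1:42793".
	banner, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	m := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(banner)
	if m == nil {
		panic("could not find proxy port in: " + banner)
	}

	// Anything under /api/ is now reachable through the local proxy.
	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", m[1]))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // an APIVersions document listing "v1"
}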
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:12:58.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:12:59.092: INFO: Creating deployment "test-recreate-deployment"
Jan  4 13:12:59.118: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  4 13:12:59.146: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  4 13:13:01.164: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  4 13:13:01.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:13:03.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:13:05.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:13:07.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740379, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:13:09.178: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  4 13:13:09.207: INFO: Updating deployment test-recreate-deployment
Jan  4 13:13:09.207: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  4 13:13:09.788: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-p6r4q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p6r4q/deployments/test-recreate-deployment,UID:eddea535-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144951,Generation:2,CreationTimestamp:2020-01-04 13:12:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-04 13:13:09 +0000 UTC 2020-01-04 13:13:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-04 13:13:09 +0000 UTC 2020-01-04 13:12:59 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  4 13:13:09.796: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-p6r4q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p6r4q/replicasets/test-recreate-deployment-589c4bfd,UID:f4170c3c-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144950,Generation:1,CreationTimestamp:2020-01-04 13:13:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment eddea535-2ef3-11ea-a994-fa163e34d433 0xc002082a2f 0xc002082a40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 13:13:09.796: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  4 13:13:09.796: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-p6r4q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p6r4q/replicasets/test-recreate-deployment-5bf7f65dc,UID:ede69211-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144940,Generation:2,CreationTimestamp:2020-01-04 13:12:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment eddea535-2ef3-11ea-a994-fa163e34d433 0xc002082bb0 0xc002082bb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 13:13:10.177: INFO: Pod "test-recreate-deployment-589c4bfd-d9574" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-d9574,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-p6r4q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6r4q/pods/test-recreate-deployment-589c4bfd-d9574,UID:f41da40e-2ef3-11ea-a994-fa163e34d433,ResourceVersion:17144947,Generation:0,CreationTimestamp:2020-01-04 13:13:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd f4170c3c-2ef3-11ea-a994-fa163e34d433 0xc00221be6f 0xc00221be80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lf7rh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lf7rh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lf7rh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00221bee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00221bf00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:13:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:13:10.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-p6r4q" for this suite.
Jan  4 13:13:21.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:13:21.422: INFO: namespace: e2e-tests-deployment-p6r4q, resource: bindings, ignored listing per whitelist
Jan  4 13:13:21.433: INFO: namespace e2e-tests-deployment-p6r4q deletion completed in 11.240989121s

• [SLOW TEST:22.633 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
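Editor's note: the object dumps above describe a Deployment with strategy Recreate whose pod template is swapped from the redis test image to nginx mid-test; with Recreate, the old ReplicaSet is scaled to zero before the new one creates any pods, which is why the new pod is still Pending when the test finishes. A client-go sketch follows (clientset construction as in the watch sketch; error handling trimmed to the essentials).

package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recreateRollout creates a Recreate-strategy Deployment and then triggers a
// new rollout by changing the pod template, as the test above does.
func recreateRollout(cs kubernetes.Interface, ns string) error {
	replicas := int32(1)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// Recreate: scale the old ReplicaSet to 0 before starting new pods.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{})
	if err != nil {
		return err
	}

	// Changing the template produces revision 2; the old pods are deleted first.
	created.Spec.Template.Spec.Containers[0].Name = "nginx"
	created.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
	_, err = cs.AppsV1().Deployments(ns).Update(context.TODO(), created, metav1.UpdateOptions{})
	return err
}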
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:13:21.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  4 13:13:21.605: INFO: PodSpec: initContainers in spec.initContainers
Jan  4 13:14:39.433: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fb499340-2ef3-11ea-9996-0242ac110006", GenerateName:"", Namespace:"e2e-tests-init-container-twrb2", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-twrb2/pods/pod-init-fb499340-2ef3-11ea-9996-0242ac110006", UID:"fb49f4f6-2ef3-11ea-a994-fa163e34d433", ResourceVersion:"17145108", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713740401, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"605535846"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wmzwf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00280e7c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wmzwf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wmzwf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wmzwf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021b1f98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023c0420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027ec010)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027ec030)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0027ec038), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0027ec03c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740401, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740401, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740401, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713740401, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0021a9b40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00209c9a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00209ca10)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://3f78817c7862af5a1cc393eb5e951d14d7370c5171b116be91652017ce47cfb2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021a9ba0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021a9b60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:14:39.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-twrb2" for this suite.
Jan  4 13:15:03.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:15:03.693: INFO: namespace: e2e-tests-init-container-twrb2, resource: bindings, ignored listing per whitelist
Jan  4 13:15:03.802: INFO: namespace e2e-tests-init-container-twrb2 deletion completed in 24.355385542s

• [SLOW TEST:102.369 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
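Editor's note: the pod dump above is a three-container spec: init1 runs /bin/false and keeps failing (RestartCount has reached 3 by the time the assertion fires), so init2 and the app container run1 never start, which is exactly what the conformance test requires for a RestartAlways pod. Reduced to the spec, with only the object name changed for illustration:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod mirrors the pod in the log: a failing init container blocks
// the second init container and the app container indefinitely.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init-demo",
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			// RestartAlways means the kubelet retries init1 forever with backoff.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}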
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:15:03.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:15:04.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-t78s4" to be "success or failure"
Jan  4 13:15:04.210: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.211205ms
Jan  4 13:15:06.227: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029424304s
Jan  4 13:15:08.253: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055744335s
Jan  4 13:15:10.291: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094233924s
Jan  4 13:15:12.344: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146676502s
Jan  4 13:15:14.544: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.346747367s
STEP: Saw pod success
Jan  4 13:15:14.544: INFO: Pod "downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:15:14.556: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:15:14.787: INFO: Waiting for pod downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:15:14.795: INFO: Pod downwardapi-volume-386c0b8e-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:15:14.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t78s4" for this suite.
Jan  4 13:15:20.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:15:21.119: INFO: namespace: e2e-tests-downward-api-t78s4, resource: bindings, ignored listing per whitelist
Jan  4 13:15:21.221: INFO: namespace e2e-tests-downward-api-t78s4 deletion completed in 6.417833265s

• [SLOW TEST:17.419 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:15:21.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:15:21.427: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  4 13:15:21.530: INFO: Number of nodes with available pods: 0
Jan  4 13:15:21.530: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:22.619: INFO: Number of nodes with available pods: 0
Jan  4 13:15:22.619: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:23.566: INFO: Number of nodes with available pods: 0
Jan  4 13:15:23.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:24.666: INFO: Number of nodes with available pods: 0
Jan  4 13:15:24.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:25.559: INFO: Number of nodes with available pods: 0
Jan  4 13:15:25.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:26.697: INFO: Number of nodes with available pods: 0
Jan  4 13:15:26.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:28.394: INFO: Number of nodes with available pods: 0
Jan  4 13:15:28.394: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:28.586: INFO: Number of nodes with available pods: 0
Jan  4 13:15:28.586: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:29.574: INFO: Number of nodes with available pods: 0
Jan  4 13:15:29.574: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:30.607: INFO: Number of nodes with available pods: 0
Jan  4 13:15:30.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:31.566: INFO: Number of nodes with available pods: 1
Jan  4 13:15:31.566: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  4 13:15:31.636: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:32.677: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:33.663: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:34.672: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:35.663: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:36.671: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:37.666: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:38.671: INFO: Wrong image for pod: daemon-set-8vc6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 13:15:38.672: INFO: Pod daemon-set-8vc6z is not available
Jan  4 13:15:39.664: INFO: Pod daemon-set-hr7cn is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  4 13:15:39.679: INFO: Number of nodes with available pods: 0
Jan  4 13:15:39.679: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:40.705: INFO: Number of nodes with available pods: 0
Jan  4 13:15:40.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:41.710: INFO: Number of nodes with available pods: 0
Jan  4 13:15:41.710: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:42.729: INFO: Number of nodes with available pods: 0
Jan  4 13:15:42.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:44.568: INFO: Number of nodes with available pods: 0
Jan  4 13:15:44.569: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:44.867: INFO: Number of nodes with available pods: 0
Jan  4 13:15:44.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:45.758: INFO: Number of nodes with available pods: 0
Jan  4 13:15:45.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:46.898: INFO: Number of nodes with available pods: 0
Jan  4 13:15:46.898: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:47.729: INFO: Number of nodes with available pods: 0
Jan  4 13:15:47.730: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:48.707: INFO: Number of nodes with available pods: 0
Jan  4 13:15:48.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:15:49.704: INFO: Number of nodes with available pods: 1
Jan  4 13:15:49.705: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gl28w, will wait for the garbage collector to delete the pods
Jan  4 13:15:49.815: INFO: Deleting DaemonSet.extensions daemon-set took: 28.879789ms
Jan  4 13:15:50.016: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.756274ms
Jan  4 13:15:56.933: INFO: Number of nodes with available pods: 0
Jan  4 13:15:56.933: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 13:15:56.937: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gl28w/daemonsets","resourceVersion":"17145289"},"items":null}

Jan  4 13:15:56.942: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gl28w/pods","resourceVersion":"17145289"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:15:56.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gl28w" for this suite.
Jan  4 13:16:04.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:16:05.139: INFO: namespace: e2e-tests-daemonsets-gl28w, resource: bindings, ignored listing per whitelist
Jan  4 13:16:05.234: INFO: namespace e2e-tests-daemonsets-gl28w deletion completed in 8.277560874s

• [SLOW TEST:44.013 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
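For reference, a minimal client-go sketch of what the spec above exercises: create a DaemonSet whose update strategy is RollingUpdate, then change the pod template image and let the controller roll the pods. Object names, the namespace and the images are illustrative, and the calls use the context-free client-go method signatures contemporary with this v1.13 cluster (newer releases add a context.Context and an Options argument).

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Client built from the same kubeconfig the suite uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // RollingUpdate is what lets the image change below replace the pods.
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    created, err := cs.AppsV1().DaemonSets("default").Create(ds)
    if err != nil {
        panic(err)
    }

    // Updating the pod template image triggers the rollout the spec then waits
    // on ("Wrong image for pod ..." until the replacement pod is available).
    created.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
    if _, err := cs.AppsV1().DaemonSets("default").Update(created); err != nil {
        panic(err)
    }
}
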
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:16:05.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-5ce9621d-2ef4-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 13:16:05.414: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-plw49" to be "success or failure"
Jan  4 13:16:05.456: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 41.219909ms
Jan  4 13:16:07.855: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440942321s
Jan  4 13:16:09.875: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46102888s
Jan  4 13:16:11.974: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559352763s
Jan  4 13:16:13.995: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580167424s
Jan  4 13:16:16.018: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.60379193s
STEP: Saw pod success
Jan  4 13:16:16.018: INFO: Pod "pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:16:16.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jan  4 13:16:16.340: INFO: Waiting for pod pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:16:16.351: INFO: Pod pod-configmaps-5cea672b-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:16:16.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-plw49" for this suite.
Jan  4 13:16:22.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:16:22.632: INFO: namespace: e2e-tests-configmap-plw49, resource: bindings, ignored listing per whitelist
Jan  4 13:16:22.691: INFO: namespace e2e-tests-configmap-plw49 deletion completed in 6.328261968s

• [SLOW TEST:17.456 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
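A sketch of the shape of pod the spec above creates: one ConfigMap key is remapped to a custom path inside the volume, and the pod runs as a non-root UID. The client setup and context-free client-go signatures are the same assumptions as in the DaemonSet sketch above; busybox and the cat command stand in for the suite's own mount-test image.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default"
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
        panic(err)
    }

    nonRoot := int64(1000) // any non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        // The "mappings" part: key data-1 is projected to a custom path.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
        panic(err)
    }
}
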
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:16:22.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  4 13:16:23.019: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 13:16:23.088: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 13:16:23.094: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  4 13:16:23.125: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 13:16:23.125: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 13:16:23.125: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 13:16:23.125: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  4 13:16:23.125: INFO: 	Container coredns ready: true, restart count 0
Jan  4 13:16:23.125: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  4 13:16:23.125: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 13:16:23.125: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  4 13:16:23.125: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  4 13:16:23.125: INFO: 	Container weave ready: true, restart count 0
Jan  4 13:16:23.125: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 13:16:23.125: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  4 13:16:23.125: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  4 13:16:23.224: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-678a7691-2ef4-11ea-9996-0242ac110006.15e6b168d58e2972], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-8m5n2/filler-pod-678a7691-2ef4-11ea-9996-0242ac110006 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-678a7691-2ef4-11ea-9996-0242ac110006.15e6b16a0f0325e6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-678a7691-2ef4-11ea-9996-0242ac110006.15e6b16abc881d53], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-678a7691-2ef4-11ea-9996-0242ac110006.15e6b16b0c4f975a], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e6b16ba458ac66], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:16:36.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-8m5n2" for this suite.
Jan  4 13:16:44.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:16:44.676: INFO: namespace: e2e-tests-sched-pred-8m5n2, resource: bindings, ignored listing per whitelist
Jan  4 13:16:44.746: INFO: namespace e2e-tests-sched-pred-8m5n2 deletion completed in 8.184667209s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.055 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
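The spec above sums the CPU requests already on the node, fills most of the remainder with a pause pod, and then shows that one more request cannot be scheduled. A rough sketch of the two pods involved, with the same client setup assumptions as the earlier sketches; the request amounts here are placeholders, whereas the real test derives them from node allocatable minus the requests it logged.

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podWithCPURequest returns a pause pod that requests the given amount of CPU.
func podWithCPURequest(name, cpu string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse(cpu),
                    },
                },
            }},
        },
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default"
    // Fill most of the node's spare CPU.
    if _, err := cs.CoreV1().Pods(ns).Create(podWithCPURequest("filler-pod", "2")); err != nil {
        panic(err)
    }
    // This request no longer fits, so the pod stays Pending with the
    // "0/1 nodes are available: 1 Insufficient cpu." event seen above.
    if _, err := cs.CoreV1().Pods(ns).Create(podWithCPURequest("additional-pod", "1")); err != nil {
        panic(err)
    }
}
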
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:16:44.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-flkrv
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-flkrv to expose endpoints map[]
Jan  4 13:16:45.995: INFO: Get endpoints failed (23.92339ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  4 13:16:47.004: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-flkrv exposes endpoints map[] (1.032414462s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-flkrv
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-flkrv to expose endpoints map[pod1:[100]]
Jan  4 13:16:51.530: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.517696454s elapsed, will retry)
Jan  4 13:16:56.895: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-flkrv exposes endpoints map[pod1:[100]] (9.88263242s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-flkrv
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-flkrv to expose endpoints map[pod1:[100] pod2:[101]]
Jan  4 13:17:01.216: INFO: Unexpected endpoints: found map[75b76282-2ef4-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (4.301592087s elapsed, will retry)
Jan  4 13:17:07.223: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-flkrv exposes endpoints map[pod1:[100] pod2:[101]] (10.308962909s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-flkrv
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-flkrv to expose endpoints map[pod2:[101]]
Jan  4 13:17:08.314: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-flkrv exposes endpoints map[pod2:[101]] (1.07772967s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-flkrv
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-flkrv to expose endpoints map[]
Jan  4 13:17:10.068: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-flkrv exposes endpoints map[] (1.713568765s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:17:10.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-flkrv" for this suite.
Jan  4 13:17:35.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:17:35.188: INFO: namespace: e2e-tests-services-flkrv, resource: bindings, ignored listing per whitelist
Jan  4 13:17:35.322: INFO: namespace e2e-tests-services-flkrv deletion completed in 24.386438501s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.575 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
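The spec above drives the endpoints controller: a service with two named ports starts with empty endpoints, and each Ready pod matching the selector contributes its IP and target port. A sketch of the service plus one backing pod, with the same client setup assumptions as before; the busybox httpd command is illustrative, the suite uses its own serving image.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default"
    labels := map[string]string{"app": "multi-endpoint-test"}

    // A service exposing two named ports; its endpoints stay empty until pods
    // matching the selector are Ready on the target ports.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
        Spec: corev1.ServiceSpec{
            Selector: labels,
            Ports: []corev1.ServicePort{
                {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
            },
        },
    }
    if _, err := cs.CoreV1().Services(ns).Create(svc); err != nil {
        panic(err)
    }

    // pod1 backs the first port; the serving command is illustrative.
    pod1 := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "server",
                Image:   "busybox",
                Command: []string{"httpd", "-f", "-p", "100"},
                Ports:   []corev1.ContainerPort{{ContainerPort: 100}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(pod1); err != nil {
        panic(err)
    }

    // Once pod1 is Ready, the endpoints object lists its IP for port 100,
    // which is what the spec above polls for.
    eps, err := cs.CoreV1().Endpoints(ns).Get("multi-endpoint-test", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("endpoints subsets: %+v\n", eps.Subsets)
}
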
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:17:35.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  4 13:17:35.524: INFO: Waiting up to 5m0s for pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-zgwzk" to be "success or failure"
Jan  4 13:17:35.538: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.321534ms
Jan  4 13:17:37.783: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258792531s
Jan  4 13:17:39.799: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274816867s
Jan  4 13:17:42.349: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.824287492s
Jan  4 13:17:44.483: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.959145804s
Jan  4 13:17:46.516: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.991699265s
STEP: Saw pod success
Jan  4 13:17:46.516: INFO: Pod "pod-92a067b0-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:17:46.565: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-92a067b0-2ef4-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:17:46.763: INFO: Waiting for pod pod-92a067b0-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:17:46.768: INFO: Pod pod-92a067b0-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:17:46.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zgwzk" for this suite.
Jan  4 13:17:54.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:17:54.992: INFO: namespace: e2e-tests-emptydir-zgwzk, resource: bindings, ignored listing per whitelist
Jan  4 13:17:55.027: INFO: namespace e2e-tests-emptydir-zgwzk deletion completed in 8.251987219s

• [SLOW TEST:19.705 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
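What the spec above boils down to: an emptyDir volume on the default, disk-backed medium is mounted and the container prints the mount point's mode, which the test then reads back from the pod log. A sketch with the usual assumptions (busybox stands in for the suite's mount-test image).

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.PodSpec{}.RestartPolicy, // replaced below
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Print the permissions of the mount point; the test asserts on them.
                Command:      []string{"sh", "-c", "ls -ld /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // An empty medium selects the node's default (disk-backed) medium.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    pod.Spec.RestartPolicy = corev1.RestartPolicyNever
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}
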
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:17:55.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  4 13:18:07.957: INFO: Successfully updated pod "labelsupdate9e565705-2ef4-11ea-9996-0242ac110006"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:18:10.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xcs8d" for this suite.
Jan  4 13:18:36.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:18:36.232: INFO: namespace: e2e-tests-downward-api-xcs8d, resource: bindings, ignored listing per whitelist
Jan  4 13:18:36.524: INFO: namespace e2e-tests-downward-api-xcs8d deletion completed in 26.430528251s

• [SLOW TEST:41.496 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
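The spec above relies on the downward API volume being kept up to date by the kubelet: the pod projects its own metadata.labels into a file, the test updates the labels, and the file content changes shortly afterwards. A sketch of the projection plus the label update, with the same assumptions as the earlier sketches (a real test retries the update on conflict).

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default"
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "labelsupdate", Labels: map[string]string{"time": "t0"}},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
    running, err := cs.CoreV1().Pods(ns).Create(pod)
    if err != nil {
        panic(err)
    }

    // Changing the labels is what "Successfully updated pod" above refers to;
    // the kubelet then rewrites /etc/podinfo/labels inside the running pod.
    running.Labels["time"] = "t1"
    if _, err := cs.CoreV1().Pods(ns).Update(running); err != nil {
        panic(err)
    }
}
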
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:18:36.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-b715b3ed-2ef4-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 13:18:36.731: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-t7788" to be "success or failure"
Jan  4 13:18:36.755: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 23.308776ms
Jan  4 13:18:39.320: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.588547069s
Jan  4 13:18:41.337: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.605641269s
Jan  4 13:18:43.355: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624017699s
Jan  4 13:18:45.408: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.676457339s
Jan  4 13:18:47.552: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.820700024s
STEP: Saw pod success
Jan  4 13:18:47.552: INFO: Pod "pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:18:47.806: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 13:18:48.079: INFO: Waiting for pod pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:18:48.142: INFO: Pod pod-projected-configmaps-b716849e-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:18:48.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t7788" for this suite.
Jan  4 13:18:54.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:18:54.288: INFO: namespace: e2e-tests-projected-t7788, resource: bindings, ignored listing per whitelist
Jan  4 13:18:54.333: INFO: namespace e2e-tests-projected-t7788 deletion completed in 6.181019152s

• [SLOW TEST:17.808 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
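Functionally this is the same as the plain ConfigMap-volume spec earlier, except the data is delivered through a projected volume, which can merge several sources into a single mount. Only the volume definition really differs; the usual assumptions apply, and the referenced ConfigMap is assumed to already exist.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    nonRoot := int64(1000)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected",
                VolumeSource: corev1.VolumeSource{
                    // A projected volume wraps one or more sources; here a single
                    // ConfigMap with a key remapped to a path.
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}
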
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:18:54.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  4 13:18:54.554: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:19:12.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wl9tb" for this suite.
Jan  4 13:19:18.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:19:18.302: INFO: namespace: e2e-tests-init-container-wl9tb, resource: bindings, ignored listing per whitelist
Jan  4 13:19:18.471: INFO: namespace e2e-tests-init-container-wl9tb deletion completed in 6.289384674s

• [SLOW TEST:24.138 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
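The behaviour checked above: with restartPolicy Never, a failing init container is not retried, the app containers never start, and the pod ends up Failed. A minimal pod that reproduces that, with the usual assumptions.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
        Spec: corev1.PodSpec{
            // Never means the failed init container is not restarted, so the
            // pod goes straight to Failed and "run1" is never started.
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{{
                Name:    "init1",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
            Containers: []corev1.Container{{
                Name:    "run1",
                Image:   "busybox",
                Command: []string{"/bin/true"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}
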
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:19:18.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d0533586-2ef4-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 13:19:19.055: INFO: Waiting up to 5m0s for pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-p9bqj" to be "success or failure"
Jan  4 13:19:19.071: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.141245ms
Jan  4 13:19:21.085: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029904326s
Jan  4 13:19:23.102: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046394325s
Jan  4 13:19:25.276: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220536921s
Jan  4 13:19:27.296: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240358183s
Jan  4 13:19:29.311: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255737595s
STEP: Saw pod success
Jan  4 13:19:29.311: INFO: Pod "pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:19:29.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:19:29.523: INFO: Waiting for pod pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:19:29.530: INFO: Pod pod-secrets-d0548e84-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:19:29.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-p9bqj" for this suite.
Jan  4 13:19:35.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:19:35.709: INFO: namespace: e2e-tests-secrets-p9bqj, resource: bindings, ignored listing per whitelist
Jan  4 13:19:35.759: INFO: namespace e2e-tests-secrets-p9bqj deletion completed in 6.218912324s

• [SLOW TEST:17.287 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
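Same pattern as the ConfigMap specs, but with a Secret as the volume source; the secret's data arrives decoded as files under the mount path. Sketch with the usual assumptions.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default"
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
        // Data values are raw bytes here; the API stores them base64-encoded.
        Data: map[string][]byte{"data-1": []byte("value-1")},
    }
    if _, err := cs.CoreV1().Secrets(ns).Create(secret); err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
        panic(err)
    }
}
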
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:19:35.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:19:35.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-9n9pm" to be "success or failure"
Jan  4 13:19:36.015: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.752877ms
Jan  4 13:19:38.037: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04154185s
Jan  4 13:19:40.091: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095685614s
Jan  4 13:19:42.359: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36346934s
Jan  4 13:19:44.382: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386346941s
Jan  4 13:19:46.405: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.409328503s
STEP: Saw pod success
Jan  4 13:19:46.405: INFO: Pod "downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:19:46.416: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:19:47.367: INFO: Waiting for pod downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:19:47.440: INFO: Pod downwardapi-volume-da6ebb29-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:19:47.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9n9pm" for this suite.
Jan  4 13:19:53.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:19:53.844: INFO: namespace: e2e-tests-downward-api-9n9pm, resource: bindings, ignored listing per whitelist
Jan  4 13:19:53.927: INFO: namespace e2e-tests-downward-api-9n9pm deletion completed in 6.457549398s

• [SLOW TEST:18.168 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
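Here the downward API volume exposes a resource field rather than metadata: the container's own CPU request is written to a file that the test then reads back from the pod log. Sketch with the usual assumptions.

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-request"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_request",
                            // Names the container whose CPU request is reported.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}
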
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:19:53.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  4 13:20:05.124: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e55e1fdb-2ef4-11ea-9996-0242ac110006"
Jan  4 13:20:05.124: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e55e1fdb-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-pods-p9v84" to be "terminated due to deadline exceeded"
Jan  4 13:20:05.142: INFO: Pod "pod-update-activedeadlineseconds-e55e1fdb-2ef4-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 18.291577ms
Jan  4 13:20:07.152: INFO: Pod "pod-update-activedeadlineseconds-e55e1fdb-2ef4-11ea-9996-0242ac110006": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02805392s
Jan  4 13:20:07.152: INFO: Pod "pod-update-activedeadlineseconds-e55e1fdb-2ef4-11ea-9996-0242ac110006" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:20:07.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-p9v84" for this suite.
Jan  4 13:20:15.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:20:15.403: INFO: namespace: e2e-tests-pods-p9v84, resource: bindings, ignored listing per whitelist
Jan  4 13:20:15.405: INFO: namespace e2e-tests-pods-p9v84 deletion completed in 8.240884443s

• [SLOW TEST:21.478 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
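activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod (it can only be set or lowered); once the deadline passes, the kubelet kills the pod and it ends up Failed with reason DeadlineExceeded, which is exactly the phase transition logged above. Sketch with the usual assumptions; it presumes a running pod named pod-update-activedeadlineseconds.

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns := "default"
    pod, err := cs.CoreV1().Pods(ns).Get("pod-update-activedeadlineseconds", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Give the already-running pod a few seconds to live; the kubelet then
    // terminates it and sets Phase=Failed, Reason=DeadlineExceeded.
    deadline := int64(5)
    pod.Spec.ActiveDeadlineSeconds = &deadline
    if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
        panic(err)
    }
}
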
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:20:15.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  4 13:20:15.685: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tvnq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvnq4/configmaps/e2e-watch-test-label-changed,UID:f207b486-2ef4-11ea-a994-fa163e34d433,ResourceVersion:17145913,Generation:0,CreationTimestamp:2020-01-04 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 13:20:15.685: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tvnq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvnq4/configmaps/e2e-watch-test-label-changed,UID:f207b486-2ef4-11ea-a994-fa163e34d433,ResourceVersion:17145914,Generation:0,CreationTimestamp:2020-01-04 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  4 13:20:15.685: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tvnq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvnq4/configmaps/e2e-watch-test-label-changed,UID:f207b486-2ef4-11ea-a994-fa163e34d433,ResourceVersion:17145915,Generation:0,CreationTimestamp:2020-01-04 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  4 13:20:25.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tvnq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvnq4/configmaps/e2e-watch-test-label-changed,UID:f207b486-2ef4-11ea-a994-fa163e34d433,ResourceVersion:17145929,Generation:0,CreationTimestamp:2020-01-04 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 13:20:25.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tvnq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvnq4/configmaps/e2e-watch-test-label-changed,UID:f207b486-2ef4-11ea-a994-fa163e34d433,ResourceVersion:17145930,Generation:0,CreationTimestamp:2020-01-04 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  4 13:20:25.912: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tvnq4,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvnq4/configmaps/e2e-watch-test-label-changed,UID:f207b486-2ef4-11ea-a994-fa163e34d433,ResourceVersion:17145931,Generation:0,CreationTimestamp:2020-01-04 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:20:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-tvnq4" for this suite.
Jan  4 13:20:31.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:20:32.047: INFO: namespace: e2e-tests-watch-tvnq4, resource: bindings, ignored listing per whitelist
Jan  4 13:20:32.089: INFO: namespace e2e-tests-watch-tvnq4 deletion completed in 6.167576493s

• [SLOW TEST:16.684 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
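The interesting detail in the spec above is that watches are evaluated against the selector, not the object's lifetime: when the configmap's label is changed so it no longer matches, the watcher receives a DELETED event, and when the label is restored it receives a fresh ADDED event. A sketch of such a watch, with the same assumptions as before.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Watch only configmaps carrying this label; relabelling an object out of
    // the selector shows up here as DELETED, relabelling it back as ADDED.
    w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
        LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    for ev := range w.ResultChan() {
        fmt.Printf("Got : %v %T\n", ev.Type, ev.Object)
    }
}
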
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:20:32.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  4 13:20:32.773: INFO: Waiting up to 5m0s for pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006" in namespace "e2e-tests-var-expansion-zxgbz" to be "success or failure"
Jan  4 13:20:33.104: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 330.807775ms
Jan  4 13:20:35.137: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364238319s
Jan  4 13:20:37.151: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378015141s
Jan  4 13:20:39.661: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88819764s
Jan  4 13:20:41.711: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.938535387s
Jan  4 13:20:43.727: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.953652458s
Jan  4 13:20:45.758: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.985509583s
STEP: Saw pod success
Jan  4 13:20:45.758: INFO: Pod "var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:20:45.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan  4 13:20:45.936: INFO: Waiting for pod var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006 to disappear
Jan  4 13:20:45.949: INFO: Pod var-expansion-fc3cd5a4-2ef4-11ea-9996-0242ac110006 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:20:45.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zxgbz" for this suite.
Jan  4 13:20:51.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:20:52.042: INFO: namespace: e2e-tests-var-expansion-zxgbz, resource: bindings, ignored listing per whitelist
Jan  4 13:20:52.194: INFO: namespace e2e-tests-var-expansion-zxgbz deletion completed in 6.230249214s

• [SLOW TEST:20.105 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
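The substitution tested above is done by the kubelet, not the shell: "$(NAME)" references in a container's command and args are expanded from the container's declared environment before the process starts. A minimal pod showing it, with the usual assumptions.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "dapi-container",
                Image: "busybox",
                Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
                // $(MESSAGE) is substituted by the kubelet before the shell runs,
                // so the container simply echoes the expanded value.
                Command: []string{"sh", "-c", "echo $(MESSAGE)"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}
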
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:20:52.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  4 13:20:52.417: INFO: Waiting up to 5m0s for pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-vrrbp" to be "success or failure"
Jan  4 13:20:52.450: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 32.990284ms
Jan  4 13:20:54.487: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069802565s
Jan  4 13:20:56.526: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10853251s
Jan  4 13:20:58.566: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148756805s
Jan  4 13:21:00.581: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1633942s
Jan  4 13:21:02.636: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218577247s
STEP: Saw pod success
Jan  4 13:21:02.636: INFO: Pod "pod-07fa41b2-2ef5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:21:02.653: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-07fa41b2-2ef5-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:21:03.158: INFO: Waiting for pod pod-07fa41b2-2ef5-11ea-9996-0242ac110006 to disappear
Jan  4 13:21:03.174: INFO: Pod pod-07fa41b2-2ef5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:21:03.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vrrbp" for this suite.
Jan  4 13:21:09.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:21:09.324: INFO: namespace: e2e-tests-emptydir-vrrbp, resource: bindings, ignored listing per whitelist
Jan  4 13:21:09.520: INFO: namespace e2e-tests-emptydir-vrrbp deletion completed in 6.338820543s

• [SLOW TEST:17.325 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
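Same family as the earlier emptyDir spec: the (root,0777,default) variant runs as root, writes a 0777 file into an emptyDir on the default medium and checks both content and mode. Only the container command differs from the earlier sketch; the usual assumptions apply.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Runs as root by default: write a 0777 file and show its mode and
                // content, which is what the test asserts on via the pod log.
                Command: []string{"sh", "-c",
                    "echo mount-tester > /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume/file && cat /test-volume/file"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name:         "test-volume",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}
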
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:21:09.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan  4 13:21:09.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:12.590: INFO: stderr: ""
Jan  4 13:21:12.590: INFO: stdout: "pod/pause created\n"
Jan  4 13:21:12.590: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  4 13:21:12.590: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-9ns2q" to be "running and ready"
Jan  4 13:21:12.697: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 106.455913ms
Jan  4 13:21:14.722: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131131158s
Jan  4 13:21:16.737: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146355347s
Jan  4 13:21:18.764: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173981219s
Jan  4 13:21:20.785: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195009178s
Jan  4 13:21:22.799: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.208469379s
Jan  4 13:21:22.799: INFO: Pod "pause" satisfied condition "running and ready"
Jan  4 13:21:22.799: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  4 13:21:22.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:22.993: INFO: stderr: ""
Jan  4 13:21:22.993: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  4 13:21:22.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:23.097: INFO: stderr: ""
Jan  4 13:21:23.097: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  4 13:21:23.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:23.281: INFO: stderr: ""
Jan  4 13:21:23.281: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  4 13:21:23.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:23.376: INFO: stderr: ""
Jan  4 13:21:23.376: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan  4 13:21:23.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:23.595: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 13:21:23.596: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  4 13:21:23.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-9ns2q'
Jan  4 13:21:23.745: INFO: stderr: "No resources found.\n"
Jan  4 13:21:23.745: INFO: stdout: ""
Jan  4 13:21:23.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-9ns2q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 13:21:23.900: INFO: stderr: ""
Jan  4 13:21:23.900: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:21:23.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9ns2q" for this suite.
Jan  4 13:21:32.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:21:32.123: INFO: namespace: e2e-tests-kubectl-9ns2q, resource: bindings, ignored listing per whitelist
Jan  4 13:21:32.167: INFO: namespace e2e-tests-kubectl-9ns2q deletion completed in 8.254339485s

• [SLOW TEST:22.646 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:21:32.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-wfkw8
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wfkw8 to expose endpoints map[]
Jan  4 13:21:32.891: INFO: Get endpoints failed (163.15456ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  4 13:21:33.906: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wfkw8 exposes endpoints map[] (1.178210865s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-wfkw8
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wfkw8 to expose endpoints map[pod1:[80]]
Jan  4 13:21:38.701: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.773936042s elapsed, will retry)
Jan  4 13:21:44.749: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wfkw8 exposes endpoints map[pod1:[80]] (10.821895822s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-wfkw8
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wfkw8 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  4 13:21:49.046: INFO: Unexpected endpoints: found map[20ba460b-2ef5-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.258260543s elapsed, will retry)
Jan  4 13:21:56.426: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wfkw8 exposes endpoints map[pod2:[80] pod1:[80]] (11.63775042s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-wfkw8
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wfkw8 to expose endpoints map[pod2:[80]]
Jan  4 13:21:57.648: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wfkw8 exposes endpoints map[pod2:[80]] (1.210873475s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-wfkw8
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wfkw8 to expose endpoints map[]
Jan  4 13:21:58.449: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wfkw8 exposes endpoints map[] (308.206033ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:21:59.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-wfkw8" for this suite.
Jan  4 13:22:26.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:22:26.336: INFO: namespace: e2e-tests-services-wfkw8, resource: bindings, ignored listing per whitelist
Jan  4 13:22:26.336: INFO: namespace e2e-tests-services-wfkw8 deletion completed in 26.323646424s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:54.169 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:22:26.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  4 13:22:26.768: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  4 13:22:31.782: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:22:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-cr7dt" for this suite.
Jan  4 13:22:40.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:22:40.268: INFO: namespace: e2e-tests-replication-controller-cr7dt, resource: bindings, ignored listing per whitelist
Jan  4 13:22:40.385: INFO: namespace e2e-tests-replication-controller-cr7dt deletion completed in 8.334274871s

• [SLOW TEST:14.047 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:22:40.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0104 13:22:54.991680       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 13:22:54.991: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:22:54.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-96m49" for this suite.
Jan  4 13:23:01.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:23:01.178: INFO: namespace: e2e-tests-gc-96m49, resource: bindings, ignored listing per whitelist
Jan  4 13:23:01.299: INFO: namespace e2e-tests-gc-96m49 deletion completed in 6.301220571s

• [SLOW TEST:20.913 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:23:01.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:23:01.409: INFO: Creating ReplicaSet my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006
Jan  4 13:23:01.441: INFO: Pod name my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006: Found 0 pods out of 1
Jan  4 13:23:06.813: INFO: Pod name my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006: Found 1 pods out of 1
Jan  4 13:23:06.814: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006" is running
Jan  4 13:23:12.858: INFO: Pod "my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006-2nfcz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 13:23:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 13:23:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 13:23:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 13:23:01 +0000 UTC Reason: Message:}])
Jan  4 13:23:12.858: INFO: Trying to dial the pod
Jan  4 13:23:17.921: INFO: Controller my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006: Got expected result from replica 1 [my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006-2nfcz]: "my-hostname-basic-54e0a85b-2ef5-11ea-9996-0242ac110006-2nfcz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:23:17.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ktzrg" for this suite.
Jan  4 13:23:25.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:23:26.052: INFO: namespace: e2e-tests-replicaset-ktzrg, resource: bindings, ignored listing per whitelist
Jan  4 13:23:26.139: INFO: namespace e2e-tests-replicaset-ktzrg deletion completed in 8.208065846s

• [SLOW TEST:24.839 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:23:26.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  4 13:23:27.327: INFO: Waiting up to 5m0s for pod "pod-64308a34-2ef5-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-tlxxv" to be "success or failure"
Jan  4 13:23:27.345: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.782044ms
Jan  4 13:23:29.387: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059510461s
Jan  4 13:23:31.410: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082580055s
Jan  4 13:23:33.944: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616272384s
Jan  4 13:23:36.374: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.046153807s
Jan  4 13:23:39.595: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.267672387s
Jan  4 13:23:41.623: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.295666495s
STEP: Saw pod success
Jan  4 13:23:41.623: INFO: Pod "pod-64308a34-2ef5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:23:42.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-64308a34-2ef5-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:23:42.813: INFO: Waiting for pod pod-64308a34-2ef5-11ea-9996-0242ac110006 to disappear
Jan  4 13:23:42.832: INFO: Pod pod-64308a34-2ef5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:23:42.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tlxxv" for this suite.
Jan  4 13:23:49.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:23:49.143: INFO: namespace: e2e-tests-emptydir-tlxxv, resource: bindings, ignored listing per whitelist
Jan  4 13:23:49.215: INFO: namespace e2e-tests-emptydir-tlxxv deletion completed in 6.363175586s

• [SLOW TEST:23.076 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:23:49.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:24:01.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fvnxd" for this suite.
Jan  4 13:24:59.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:24:59.955: INFO: namespace: e2e-tests-kubelet-test-fvnxd, resource: bindings, ignored listing per whitelist
Jan  4 13:25:00.522: INFO: namespace e2e-tests-kubelet-test-fvnxd deletion completed in 58.756984614s

• [SLOW TEST:71.307 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:25:00.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:25:01.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-m9tmf" to be "success or failure"
Jan  4 13:25:01.655: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 360.942523ms
Jan  4 13:25:03.694: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399503827s
Jan  4 13:25:06.736: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.441895937s
Jan  4 13:25:08.777: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.482735434s
Jan  4 13:25:11.263: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.968869122s
Jan  4 13:25:13.285: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.990636477s
Jan  4 13:25:15.301: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 14.006417392s
Jan  4 13:25:17.320: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 16.025578605s
Jan  4 13:25:20.426: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.131655289s
STEP: Saw pod success
Jan  4 13:25:20.426: INFO: Pod "downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:25:21.025: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:25:21.389: INFO: Waiting for pod downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006 to disappear
Jan  4 13:25:21.511: INFO: Pod downwardapi-volume-9c50d163-2ef5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:25:21.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m9tmf" for this suite.
Jan  4 13:25:27.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:25:27.710: INFO: namespace: e2e-tests-projected-m9tmf, resource: bindings, ignored listing per whitelist
Jan  4 13:25:27.782: INFO: namespace e2e-tests-projected-m9tmf deletion completed in 6.261571181s

• [SLOW TEST:27.260 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:25:27.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  4 13:25:41.279: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:25:42.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-zlssf" for this suite.
Jan  4 13:26:05.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:26:05.957: INFO: namespace: e2e-tests-replicaset-zlssf, resource: bindings, ignored listing per whitelist
Jan  4 13:26:06.057: INFO: namespace e2e-tests-replicaset-zlssf deletion completed in 23.560164994s

• [SLOW TEST:38.273 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:26:06.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:26:07.383: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c3714481-2ef5-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00265e472), BlockOwnerDeletion:(*bool)(0xc00265e473)}}
Jan  4 13:26:07.572: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c3480562-2ef5-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00265e60a), BlockOwnerDeletion:(*bool)(0xc00265e60b)}}
Jan  4 13:26:07.627: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c351cc50-2ef5-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0010f3f92), BlockOwnerDeletion:(*bool)(0xc0010f3f93)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:26:18.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nmnrb" for this suite.
Jan  4 13:26:27.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:26:28.088: INFO: namespace: e2e-tests-gc-nmnrb, resource: bindings, ignored listing per whitelist
Jan  4 13:26:28.118: INFO: namespace e2e-tests-gc-nmnrb deletion completed in 9.218896635s

• [SLOW TEST:22.062 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:26:28.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0104 13:26:59.246730       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 13:26:59.246: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:26:59.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s5nk5" for this suite.
Jan  4 13:27:09.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:27:09.352: INFO: namespace: e2e-tests-gc-s5nk5, resource: bindings, ignored listing per whitelist
Jan  4 13:27:10.972: INFO: namespace e2e-tests-gc-s5nk5 deletion completed in 11.721266882s

• [SLOW TEST:42.853 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:27:10.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:27:12.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-ztcwt" to be "success or failure"
Jan  4 13:27:12.443: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 47.192468ms
Jan  4 13:27:14.629: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233698642s
Jan  4 13:27:16.670: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274973573s
Jan  4 13:27:18.742: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346849544s
Jan  4 13:27:20.761: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.365142806s
Jan  4 13:27:22.873: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.477115461s
Jan  4 13:27:24.882: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.486234393s
Jan  4 13:27:27.176: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.780263854s
Jan  4 13:27:29.210: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.814846172s
STEP: Saw pod success
Jan  4 13:27:29.211: INFO: Pod "downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:27:29.237: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:27:29.849: INFO: Waiting for pod downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006 to disappear
Jan  4 13:27:29.861: INFO: Pod downwardapi-volume-ea77ea3b-2ef5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:27:29.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ztcwt" for this suite.
Jan  4 13:27:37.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:27:37.824: INFO: namespace: e2e-tests-downward-api-ztcwt, resource: bindings, ignored listing per whitelist
Jan  4 13:27:37.886: INFO: namespace e2e-tests-downward-api-ztcwt deletion completed in 8.014740002s

• [SLOW TEST:26.913 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:27:37.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  4 13:27:38.185: INFO: Waiting up to 5m0s for pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-6zgvj" to be "success or failure"
Jan  4 13:27:38.221: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 35.978127ms
Jan  4 13:27:40.674: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488882071s
Jan  4 13:27:42.694: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508763045s
Jan  4 13:27:45.212: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.026588795s
Jan  4 13:27:47.246: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060439581s
Jan  4 13:27:49.268: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.083235888s
Jan  4 13:27:51.295: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.109617995s
STEP: Saw pod success
Jan  4 13:27:51.295: INFO: Pod "downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:27:51.315: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan  4 13:27:52.499: INFO: Waiting for pod downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006 to disappear
Jan  4 13:27:52.755: INFO: Pod downward-api-f9cce2ab-2ef5-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:27:52.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6zgvj" for this suite.
Jan  4 13:27:59.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:27:59.296: INFO: namespace: e2e-tests-downward-api-6zgvj, resource: bindings, ignored listing per whitelist
Jan  4 13:27:59.311: INFO: namespace e2e-tests-downward-api-6zgvj deletion completed in 6.514697417s

• [SLOW TEST:21.425 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:27:59.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0692f6e9-2ef6-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 13:27:59.664: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-f8tgk" to be "success or failure"
Jan  4 13:27:59.677: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.264714ms
Jan  4 13:28:01.854: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189681739s
Jan  4 13:28:03.891: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22727907s
Jan  4 13:28:05.906: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241712838s
Jan  4 13:28:07.917: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252871543s
Jan  4 13:28:09.931: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266714379s
Jan  4 13:28:11.952: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.287456867s
STEP: Saw pod success
Jan  4 13:28:11.952: INFO: Pod "pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:28:11.962: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:28:14.221: INFO: Waiting for pod pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006 to disappear
Jan  4 13:28:14.240: INFO: Pod pod-projected-secrets-06979454-2ef6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:28:14.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f8tgk" for this suite.
Jan  4 13:28:22.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:28:22.724: INFO: namespace: e2e-tests-projected-f8tgk, resource: bindings, ignored listing per whitelist
Jan  4 13:28:22.817: INFO: namespace e2e-tests-projected-f8tgk deletion completed in 8.35623047s

• [SLOW TEST:23.506 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:28:22.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:28:23.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kghpz" for this suite.
Jan  4 13:28:47.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:28:47.729: INFO: namespace: e2e-tests-kubelet-test-kghpz, resource: bindings, ignored listing per whitelist
Jan  4 13:28:47.751: INFO: namespace e2e-tests-kubelet-test-kghpz deletion completed in 24.200896207s

• [SLOW TEST:24.933 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:28:47.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  4 13:28:48.109: INFO: Waiting up to 5m0s for pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006" in namespace "e2e-tests-containers-gbc72" to be "success or failure"
Jan  4 13:28:48.169: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 59.754767ms
Jan  4 13:28:50.187: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078068555s
Jan  4 13:28:52.214: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104919738s
Jan  4 13:28:54.979: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.869986119s
Jan  4 13:28:57.568: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.459205108s
Jan  4 13:28:59.600: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.491115677s
Jan  4 13:29:02.405: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.296109183s
STEP: Saw pod success
Jan  4 13:29:02.405: INFO: Pod "client-containers-2381e63b-2ef6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:29:02.434: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-2381e63b-2ef6-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:29:03.264: INFO: Waiting for pod client-containers-2381e63b-2ef6-11ea-9996-0242ac110006 to disappear
Jan  4 13:29:03.278: INFO: Pod client-containers-2381e63b-2ef6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:29:03.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gbc72" for this suite.
Jan  4 13:29:09.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:29:09.527: INFO: namespace: e2e-tests-containers-gbc72, resource: bindings, ignored listing per whitelist
Jan  4 13:29:09.541: INFO: namespace e2e-tests-containers-gbc72 deletion completed in 6.251414609s

• [SLOW TEST:21.790 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:29:09.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-nnds
STEP: Creating a pod to test atomic-volume-subpath
Jan  4 13:29:09.913: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nnds" in namespace "e2e-tests-subpath-z62f9" to be "success or failure"
Jan  4 13:29:09.928: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 15.436585ms
Jan  4 13:29:12.711: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798006084s
Jan  4 13:29:14.756: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 4.843072577s
Jan  4 13:29:16.823: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 6.910034394s
Jan  4 13:29:20.975: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 11.061614945s
Jan  4 13:29:22.988: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 13.074707004s
Jan  4 13:29:25.022: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 15.109312253s
Jan  4 13:29:27.256: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 17.342726032s
Jan  4 13:29:29.925: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Pending", Reason="", readiness=false. Elapsed: 20.012209848s
Jan  4 13:29:31.940: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=true. Elapsed: 22.027119374s
Jan  4 13:29:33.968: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 24.054996652s
Jan  4 13:29:35.992: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 26.079368789s
Jan  4 13:29:38.012: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 28.099571156s
Jan  4 13:29:40.035: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 30.121898588s
Jan  4 13:29:42.064: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 32.151052212s
Jan  4 13:29:44.085: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 34.171672105s
Jan  4 13:29:46.104: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 36.191436288s
Jan  4 13:29:48.115: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 38.201861746s
Jan  4 13:29:50.739: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Running", Reason="", readiness=false. Elapsed: 40.825626118s
Jan  4 13:29:52.794: INFO: Pod "pod-subpath-test-downwardapi-nnds": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.880885519s
STEP: Saw pod success
Jan  4 13:29:52.794: INFO: Pod "pod-subpath-test-downwardapi-nnds" satisfied condition "success or failure"
Jan  4 13:29:52.831: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-nnds container test-container-subpath-downwardapi-nnds: 
STEP: delete the pod
Jan  4 13:29:55.975: INFO: Waiting for pod pod-subpath-test-downwardapi-nnds to disappear
Jan  4 13:29:55.996: INFO: Pod pod-subpath-test-downwardapi-nnds no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-nnds
Jan  4 13:29:55.997: INFO: Deleting pod "pod-subpath-test-downwardapi-nnds" in namespace "e2e-tests-subpath-z62f9"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:29:56.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-z62f9" for this suite.
Jan  4 13:30:04.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:30:04.431: INFO: namespace: e2e-tests-subpath-z62f9, resource: bindings, ignored listing per whitelist
Jan  4 13:30:04.664: INFO: namespace e2e-tests-subpath-z62f9 deletion completed in 8.632312026s

• [SLOW TEST:55.123 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:30:04.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 13:30:05.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:05.366: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 13:30:05.366: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  4 13:30:05.433: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  4 13:30:05.492: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  4 13:30:05.592: INFO: scanned /root for discovery docs: 
Jan  4 13:30:05.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:32.413: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  4 13:30:32.413: INFO: stdout: "Created e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac\nScaling up e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan  4 13:30:32.413: INFO: stdout: "Created e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac\nScaling up e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  4 13:30:32.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:32.742: INFO: stderr: ""
Jan  4 13:30:32.742: INFO: stdout: "e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac-d5f5r e2e-test-nginx-rc-5vtzp "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  4 13:30:37.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:37.992: INFO: stderr: ""
Jan  4 13:30:37.992: INFO: stdout: "e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac-d5f5r "
Jan  4 13:30:37.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac-d5f5r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:38.117: INFO: stderr: ""
Jan  4 13:30:38.117: INFO: stdout: "true"
Jan  4 13:30:38.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac-d5f5r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:38.251: INFO: stderr: ""
Jan  4 13:30:38.251: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  4 13:30:38.251: INFO: e2e-test-nginx-rc-2fbda13ad06fa31e88d46a996f11d0ac-d5f5r is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  4 13:30:38.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ct8fz'
Jan  4 13:30:38.395: INFO: stderr: ""
Jan  4 13:30:38.395: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:30:38.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ct8fz" for this suite.
Jan  4 13:31:02.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:31:02.633: INFO: namespace: e2e-tests-kubectl-ct8fz, resource: bindings, ignored listing per whitelist
Jan  4 13:31:02.718: INFO: namespace e2e-tests-kubectl-ct8fz deletion completed in 24.305803184s

• [SLOW TEST:58.053 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
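
For reference, the object that the deprecated "kubectl run --generator=run/v1" invocation above creates is approximately the ReplicationController below; "kubectl rolling-update" then replaces it with a new controller running the same image. This is a minimal sketch, not the exact spec the suite generates: the name, image, and run= label come from the log, the rest is assumed.

# Sketch of the RC behind "kubectl run e2e-test-nginx-rc --generator=run/v1"
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc            # same label the test later queries with -l run=e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc       # container name the test's go-template check expects
        image: docker.io/library/nginx:1.14-alpine

As the deprecation warnings in the log note, current clusters would use a Deployment plus "kubectl rollout" instead of rolling-update.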
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:31:02.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:31:03.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-2qkbq" to be "success or failure"
Jan  4 13:31:03.517: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 156.922418ms
Jan  4 13:31:05.660: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299760303s
Jan  4 13:31:07.678: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317775612s
Jan  4 13:31:09.700: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339641811s
Jan  4 13:31:12.044: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683218837s
Jan  4 13:31:14.080: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.719792988s
Jan  4 13:31:16.094: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.733496108s
STEP: Saw pod success
Jan  4 13:31:16.094: INFO: Pod "downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:31:16.102: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:31:16.790: INFO: Waiting for pod downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006 to disappear
Jan  4 13:31:16.797: INFO: Pod downwardapi-volume-7420041e-2ef6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:31:16.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2qkbq" for this suite.
Jan  4 13:31:22.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:31:22.991: INFO: namespace: e2e-tests-downward-api-2qkbq, resource: bindings, ignored listing per whitelist
Jan  4 13:31:22.992: INFO: namespace e2e-tests-downward-api-2qkbq deletion completed in 6.186666751s

• [SLOW TEST:20.272 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
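
The behaviour checked here is the per-item "mode" field of a downwardAPI volume. A minimal sketch of such a pod follows; the container name matches the log, while the pod name, image, command, and the 0400 mode value are illustrative assumptions, since the exact spec is not shown.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-demo       # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]  # prints the projected file's mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                         # per-item mode; this value is only an example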
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:31:22.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  4 13:31:23.169: INFO: Waiting up to 5m0s for pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006" in namespace "e2e-tests-containers-dsrr6" to be "success or failure"
Jan  4 13:31:23.176: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.990545ms
Jan  4 13:31:25.189: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019992245s
Jan  4 13:31:27.205: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036410215s
Jan  4 13:31:29.450: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281708212s
Jan  4 13:31:31.470: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301514895s
Jan  4 13:31:33.494: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324935714s
STEP: Saw pod success
Jan  4 13:31:33.494: INFO: Pod "client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:31:33.503: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:31:33.709: INFO: Waiting for pod client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006 to disappear
Jan  4 13:31:33.762: INFO: Pod client-containers-7ff0ee21-2ef6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:31:33.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dsrr6" for this suite.
Jan  4 13:31:39.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:31:40.018: INFO: namespace: e2e-tests-containers-dsrr6, resource: bindings, ignored listing per whitelist
Jan  4 13:31:40.135: INFO: namespace e2e-tests-containers-dsrr6 deletion completed in 6.348311932s

• [SLOW TEST:17.143 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
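
What this test exercises is the container-level "command" field, which replaces the image's ENTRYPOINT (while "args" would replace CMD). A minimal sketch, with the container name taken from the log and everything else assumed:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo             # the suite generates a unique name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # placeholder image
    # command overrides the image's default ENTRYPOINT
    command: ["/bin/echo", "entrypoint overridden"]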
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:31:40.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 13:31:40.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jgxxn'
Jan  4 13:31:42.072: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 13:31:42.072: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  4 13:31:44.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-jgxxn'
Jan  4 13:31:44.360: INFO: stderr: ""
Jan  4 13:31:44.360: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:31:44.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jgxxn" for this suite.
Jan  4 13:31:52.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:31:52.699: INFO: namespace: e2e-tests-kubectl-jgxxn, resource: bindings, ignored listing per whitelist
Jan  4 13:31:52.731: INFO: namespace e2e-tests-kubectl-jgxxn deletion completed in 8.36242252s

• [SLOW TEST:12.596 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
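
The deprecated generator above turns "kubectl run e2e-test-nginx-deployment --image=..." into a Deployment roughly like the one below (a sketch: the name and image come from the log, the labels and replica count are assumed). On current clusters you would write the Deployment yourself or use "kubectl create deployment".

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine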
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:31:52.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-bc5gn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 13:31:52.916: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 13:32:33.477: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.5 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-bc5gn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 13:32:33.477: INFO: >>> kubeConfig: /root/.kube/config
I0104 13:32:33.552636       8 log.go:172] (0xc0028102c0) (0xc000b3fd60) Create stream
I0104 13:32:33.552731       8 log.go:172] (0xc0028102c0) (0xc000b3fd60) Stream added, broadcasting: 1
I0104 13:32:33.682118       8 log.go:172] (0xc0028102c0) Reply frame received for 1
I0104 13:32:33.682377       8 log.go:172] (0xc0028102c0) (0xc000b3fe00) Create stream
I0104 13:32:33.682402       8 log.go:172] (0xc0028102c0) (0xc000b3fe00) Stream added, broadcasting: 3
I0104 13:32:33.687534       8 log.go:172] (0xc0028102c0) Reply frame received for 3
I0104 13:32:33.687575       8 log.go:172] (0xc0028102c0) (0xc001cba1e0) Create stream
I0104 13:32:33.687589       8 log.go:172] (0xc0028102c0) (0xc001cba1e0) Stream added, broadcasting: 5
I0104 13:32:33.689518       8 log.go:172] (0xc0028102c0) Reply frame received for 5
I0104 13:32:34.926850       8 log.go:172] (0xc0028102c0) Data frame received for 3
I0104 13:32:34.927082       8 log.go:172] (0xc000b3fe00) (3) Data frame handling
I0104 13:32:34.927192       8 log.go:172] (0xc000b3fe00) (3) Data frame sent
I0104 13:32:35.080582       8 log.go:172] (0xc0028102c0) (0xc000b3fe00) Stream removed, broadcasting: 3
I0104 13:32:35.080988       8 log.go:172] (0xc0028102c0) (0xc001cba1e0) Stream removed, broadcasting: 5
I0104 13:32:35.081175       8 log.go:172] (0xc0028102c0) Data frame received for 1
I0104 13:32:35.081239       8 log.go:172] (0xc000b3fd60) (1) Data frame handling
I0104 13:32:35.081277       8 log.go:172] (0xc000b3fd60) (1) Data frame sent
I0104 13:32:35.081302       8 log.go:172] (0xc0028102c0) (0xc000b3fd60) Stream removed, broadcasting: 1
I0104 13:32:35.081322       8 log.go:172] (0xc0028102c0) Go away received
I0104 13:32:35.082221       8 log.go:172] (0xc0028102c0) (0xc000b3fd60) Stream removed, broadcasting: 1
I0104 13:32:35.082245       8 log.go:172] (0xc0028102c0) (0xc000b3fe00) Stream removed, broadcasting: 3
I0104 13:32:35.082254       8 log.go:172] (0xc0028102c0) (0xc001cba1e0) Stream removed, broadcasting: 5
Jan  4 13:32:35.082: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:32:35.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-bc5gn" for this suite.
Jan  4 13:33:01.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:33:01.481: INFO: namespace: e2e-tests-pod-network-test-bc5gn, resource: bindings, ignored listing per whitelist
Jan  4 13:33:01.501: INFO: namespace e2e-tests-pod-network-test-bc5gn deletion completed in 26.394516827s

• [SLOW TEST:68.769 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:33:01.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-bac07e84-2ef6-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 13:33:01.923: INFO: Waiting up to 5m0s for pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006" in namespace "e2e-tests-configmap-qtjqf" to be "success or failure"
Jan  4 13:33:01.947: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 23.96875ms
Jan  4 13:33:04.029: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10635413s
Jan  4 13:33:06.166: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243222454s
Jan  4 13:33:08.381: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458264337s
Jan  4 13:33:10.794: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.870788876s
Jan  4 13:33:12.836: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.91343755s
STEP: Saw pod success
Jan  4 13:33:12.836: INFO: Pod "pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:33:12.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jan  4 13:33:13.026: INFO: Waiting for pod pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006 to disappear
Jan  4 13:33:13.038: INFO: Pod pod-configmaps-bac167fa-2ef6-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:33:13.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qtjqf" for this suite.
Jan  4 13:33:19.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:33:19.213: INFO: namespace: e2e-tests-configmap-qtjqf, resource: bindings, ignored listing per whitelist
Jan  4 13:33:19.278: INFO: namespace e2e-tests-configmap-qtjqf deletion completed in 6.23085957s

• [SLOW TEST:17.777 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
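
The "defaultMode" knob verified here sets the file permissions for every key projected from the ConfigMap. A minimal sketch, assuming placeholder names, data, image, and mode value (only the container name configmap-volume-test comes from the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-demo
data:
  data-1: value-1                          # placeholder key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo
      defaultMode: 0400                    # example value; applied to every projected file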
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:33:19.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-d88p
STEP: Creating a pod to test atomic-volume-subpath
Jan  4 13:33:19.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-d88p" in namespace "e2e-tests-subpath-gxh9g" to be "success or failure"
Jan  4 13:33:19.600: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 28.818373ms
Jan  4 13:33:21.660: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08827034s
Jan  4 13:33:23.694: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122261475s
Jan  4 13:33:25.710: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138779156s
Jan  4 13:33:27.912: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340723349s
Jan  4 13:33:29.924: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352433568s
Jan  4 13:33:31.947: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.375567106s
Jan  4 13:33:33.966: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.394698728s
Jan  4 13:33:35.976: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 16.404834612s
Jan  4 13:33:38.073: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Pending", Reason="", readiness=false. Elapsed: 18.501725929s
Jan  4 13:33:40.227: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 20.655212898s
Jan  4 13:33:42.251: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 22.679611545s
Jan  4 13:33:44.269: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 24.697005779s
Jan  4 13:33:46.294: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 26.722310271s
Jan  4 13:33:48.310: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 28.738483429s
Jan  4 13:33:50.334: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 30.762481009s
Jan  4 13:33:52.353: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 32.781608986s
Jan  4 13:33:54.419: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 34.847675974s
Jan  4 13:33:56.466: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Running", Reason="", readiness=false. Elapsed: 36.894527553s
Jan  4 13:33:58.632: INFO: Pod "pod-subpath-test-secret-d88p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.06052719s
STEP: Saw pod success
Jan  4 13:33:58.632: INFO: Pod "pod-subpath-test-secret-d88p" satisfied condition "success or failure"
Jan  4 13:33:58.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-d88p container test-container-subpath-secret-d88p: 
STEP: delete the pod
Jan  4 13:33:58.805: INFO: Waiting for pod pod-subpath-test-secret-d88p to disappear
Jan  4 13:33:58.814: INFO: Pod pod-subpath-test-secret-d88p no longer exists
STEP: Deleting pod pod-subpath-test-secret-d88p
Jan  4 13:33:58.814: INFO: Deleting pod "pod-subpath-test-secret-d88p" in namespace "e2e-tests-subpath-gxh9g"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:33:58.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gxh9g" for this suite.
Jan  4 13:34:06.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:34:06.967: INFO: namespace: e2e-tests-subpath-gxh9g, resource: bindings, ignored listing per whitelist
Jan  4 13:34:07.147: INFO: namespace e2e-tests-subpath-gxh9g deletion completed in 8.319401926s

• [SLOW TEST:47.869 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
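
Subpath tests like this one mount a single item of an atomically-updated volume via volumeMounts[].subPath. A minimal sketch, assuming a placeholder secret, key, image, and command:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret                          # placeholder secret
stringData:
  secret-key: secret-value                 # placeholder key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-secret
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "cat /etc/secret-file"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-file          # becomes the single file named by subPath
      subPath: secret-key
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret

A documented caveat worth keeping in mind with this pattern: subPath mounts of secret, configMap, and downwardAPI volumes do not receive updates after the pod starts.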
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:34:07.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  4 13:34:07.665: INFO: Number of nodes with available pods: 0
Jan  4 13:34:07.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:08.900: INFO: Number of nodes with available pods: 0
Jan  4 13:34:08.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:10.144: INFO: Number of nodes with available pods: 0
Jan  4 13:34:10.144: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:10.725: INFO: Number of nodes with available pods: 0
Jan  4 13:34:10.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:12.069: INFO: Number of nodes with available pods: 0
Jan  4 13:34:12.069: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:12.685: INFO: Number of nodes with available pods: 0
Jan  4 13:34:12.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:13.689: INFO: Number of nodes with available pods: 0
Jan  4 13:34:13.689: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:14.741: INFO: Number of nodes with available pods: 0
Jan  4 13:34:14.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:15.691: INFO: Number of nodes with available pods: 0
Jan  4 13:34:15.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:17.168: INFO: Number of nodes with available pods: 0
Jan  4 13:34:17.168: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:18.507: INFO: Number of nodes with available pods: 0
Jan  4 13:34:18.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:18.832: INFO: Number of nodes with available pods: 0
Jan  4 13:34:18.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:19.705: INFO: Number of nodes with available pods: 1
Jan  4 13:34:19.705: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  4 13:34:19.829: INFO: Number of nodes with available pods: 0
Jan  4 13:34:19.829: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:21.686: INFO: Number of nodes with available pods: 0
Jan  4 13:34:21.686: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:22.457: INFO: Number of nodes with available pods: 0
Jan  4 13:34:22.457: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:23.657: INFO: Number of nodes with available pods: 0
Jan  4 13:34:23.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:24.021: INFO: Number of nodes with available pods: 0
Jan  4 13:34:24.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:24.888: INFO: Number of nodes with available pods: 0
Jan  4 13:34:24.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:26.611: INFO: Number of nodes with available pods: 0
Jan  4 13:34:26.611: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:26.885: INFO: Number of nodes with available pods: 0
Jan  4 13:34:26.885: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:27.931: INFO: Number of nodes with available pods: 0
Jan  4 13:34:27.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:29.002: INFO: Number of nodes with available pods: 0
Jan  4 13:34:29.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:29.870: INFO: Number of nodes with available pods: 0
Jan  4 13:34:29.870: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:31.236: INFO: Number of nodes with available pods: 0
Jan  4 13:34:31.236: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:32.078: INFO: Number of nodes with available pods: 0
Jan  4 13:34:32.078: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:33.077: INFO: Number of nodes with available pods: 0
Jan  4 13:34:33.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:33.997: INFO: Number of nodes with available pods: 0
Jan  4 13:34:33.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:34:34.928: INFO: Number of nodes with available pods: 1
Jan  4 13:34:34.928: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-w4dbd, will wait for the garbage collector to delete the pods
Jan  4 13:34:35.019: INFO: Deleting DaemonSet.extensions daemon-set took: 17.764367ms
Jan  4 13:34:35.419: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.350222ms
Jan  4 13:34:45.634: INFO: Number of nodes with available pods: 0
Jan  4 13:34:45.634: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 13:34:45.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-w4dbd/daemonsets","resourceVersion":"17147818"},"items":null}

Jan  4 13:34:45.647: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-w4dbd/pods","resourceVersion":"17147818"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:34:45.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-w4dbd" for this suite.
Jan  4 13:34:53.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:34:53.854: INFO: namespace: e2e-tests-daemonsets-w4dbd, resource: bindings, ignored listing per whitelist
Jan  4 13:34:54.022: INFO: namespace e2e-tests-daemonsets-w4dbd deletion completed in 8.349971423s

• [SLOW TEST:46.875 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
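
The controller behaviour checked here is that when a daemon pod is forced into the Failed phase, the DaemonSet controller creates a replacement. The DaemonSet itself is deliberately simple; a minimal sketch with only the name taken from the log and the label, container, and image assumed:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                      # placeholder label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # placeholder image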
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:34:54.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:35:16.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mfb4v" for this suite.
Jan  4 13:35:23.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:35:23.525: INFO: namespace: e2e-tests-kubelet-test-mfb4v, resource: bindings, ignored listing per whitelist
Jan  4 13:35:23.728: INFO: namespace e2e-tests-kubelet-test-mfb4v deletion completed in 6.938028862s

• [SLOW TEST:29.706 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
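
The pod in this test runs a command that exits non-zero so the kubelet records a terminated state with a reason. A minimal sketch of the idea; the pod name, image, and restart policy are assumptions and the suite's actual spec may differ:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo                     # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/false"]                # always exits non-zero

After the container exits, status.containerStatuses[0].state.terminated carries the reason (typically "Error") and exit code that a check like this asserts on.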
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:35:23.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0f8dafb3-2ef7-11ea-9996-0242ac110006
STEP: Creating secret with name s-test-opt-upd-0f8db1bb-2ef7-11ea-9996-0242ac110006
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0f8dafb3-2ef7-11ea-9996-0242ac110006
STEP: Updating secret s-test-opt-upd-0f8db1bb-2ef7-11ea-9996-0242ac110006
STEP: Creating secret with name s-test-opt-create-0f8db20e-2ef7-11ea-9996-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:36:57.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dgr45" for this suite.
Jan  4 13:37:25.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:37:25.795: INFO: namespace: e2e-tests-projected-dgr45, resource: bindings, ignored listing per whitelist
Jan  4 13:37:25.892: INFO: namespace e2e-tests-projected-dgr45 deletion completed in 28.21613402s

• [SLOW TEST:122.163 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
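
The "optional" flag on projected secret sources is what lets this pod keep running while one referenced secret is deleted, another is updated, and a third is created only after the pod starts. A minimal sketch (secret and volume names shortened from the log; container name, image, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: projected-secret-volume-test     # placeholder container name
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del             # deleted while the pod is running
          optional: true
      - secret:
          name: s-test-opt-upd             # updated while the pod is running
          optional: true
      - secret:
          name: s-test-opt-create          # created only after the pod starts
          optional: true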
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:37:25.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-5857a957-2ef7-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 13:37:26.382: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-59n9d" to be "success or failure"
Jan  4 13:37:26.401: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.150963ms
Jan  4 13:37:28.418: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035918228s
Jan  4 13:37:30.544: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161891184s
Jan  4 13:37:32.570: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187716072s
Jan  4 13:37:36.210: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.828171946s
Jan  4 13:37:38.233: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.850255273s
Jan  4 13:37:40.260: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.878005019s
Jan  4 13:37:42.287: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.904764718s
STEP: Saw pod success
Jan  4 13:37:42.287: INFO: Pod "pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:37:42.293: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:37:44.895: INFO: Waiting for pod pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006 to disappear
Jan  4 13:37:45.058: INFO: Pod pod-projected-secrets-5858cd06-2ef7-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:37:45.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-59n9d" for this suite.
Jan  4 13:37:51.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:37:51.489: INFO: namespace: e2e-tests-projected-59n9d, resource: bindings, ignored listing per whitelist
Jan  4 13:37:51.697: INFO: namespace e2e-tests-projected-59n9d deletion completed in 6.625676152s

• [SLOW TEST:25.805 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
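
Here a single secret is projected into the pod twice, through two separate volumes mounted at different paths. A minimal sketch assuming a placeholder secret named my-secret with a key named key (only the container name secret-volume-test comes from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-two-mounts-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "cat /etc/projected-1/key /etc/projected-2/key"]
    volumeMounts:
    - name: projected-one
      mountPath: /etc/projected-1
    - name: projected-two
      mountPath: /etc/projected-2
  volumes:
  - name: projected-one
    projected:
      sources:
      - secret:
          name: my-secret                  # both volumes reference the same secret
  - name: projected-two
    projected:
      sources:
      - secret:
          name: my-secret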
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:37:51.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:37:51.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-ms4bz" to be "success or failure"
Jan  4 13:37:52.096: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 133.786587ms
Jan  4 13:37:54.630: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.667753875s
Jan  4 13:37:56.676: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.714358649s
Jan  4 13:37:59.073: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.110558418s
Jan  4 13:38:01.087: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.125041812s
Jan  4 13:38:03.257: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.295081677s
Jan  4 13:38:06.814: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.852163836s
STEP: Saw pod success
Jan  4 13:38:06.814: INFO: Pod "downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:38:06.836: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:38:07.096: INFO: Waiting for pod downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006 to disappear
Jan  4 13:38:07.301: INFO: Pod downwardapi-volume-67adf6e0-2ef7-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:38:07.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ms4bz" for this suite.
Jan  4 13:38:13.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:38:13.533: INFO: namespace: e2e-tests-downward-api-ms4bz, resource: bindings, ignored listing per whitelist
Jan  4 13:38:13.633: INFO: namespace e2e-tests-downward-api-ms4bz deletion completed in 6.316090359s

• [SLOW TEST:21.935 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
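
"Podname only" means the downwardAPI volume projects a single item, the pod's own metadata.name. A minimal sketch (container name from the log; pod name, image, and command assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name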
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:38:13.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  4 13:38:13.949: INFO: Waiting up to 5m0s for pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-q8tlt" to be "success or failure"
Jan  4 13:38:13.984: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 35.157066ms
Jan  4 13:38:17.656: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.707459827s
Jan  4 13:38:19.693: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.744243637s
Jan  4 13:38:21.711: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.762097406s
Jan  4 13:38:23.723: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.773981901s
Jan  4 13:38:25.743: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.794086965s
Jan  4 13:38:27.756: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.807566908s
STEP: Saw pod success
Jan  4 13:38:27.757: INFO: Pod "pod-74b5c0fe-2ef7-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:38:27.761: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-74b5c0fe-2ef7-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:38:28.798: INFO: Waiting for pod pod-74b5c0fe-2ef7-11ea-9996-0242ac110006 to disappear
Jan  4 13:38:29.113: INFO: Pod pod-74b5c0fe-2ef7-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:38:29.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q8tlt" for this suite.
Jan  4 13:38:37.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:38:37.700: INFO: namespace: e2e-tests-emptydir-q8tlt, resource: bindings, ignored listing per whitelist
Jan  4 13:38:37.782: INFO: namespace e2e-tests-emptydir-q8tlt deletion completed in 8.623528379s

• [SLOW TEST:24.148 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
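
"Volume on tmpfs" here means an emptyDir with medium: Memory, which the kubelet backs with a tmpfs mount; the test then checks the mount and its permissions. A minimal sketch (pod name, image, and command are assumptions; the container name matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # placeholder image
    command: ["/bin/sh", "-c", "mount | grep /test-volume; stat -Lc '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                       # tmpfs-backed emptyDir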
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:38:37.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  4 13:39:04.537: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 13:39:04.559: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 13:39:06.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 13:39:06.998: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 13:39:08.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 13:39:08.591: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 13:39:10.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 13:39:10.589: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 13:39:12.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 13:39:12.584: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 13:39:14.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 13:39:14.614: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:39:14.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zjqnk" for this suite.
Jan  4 13:39:38.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:39:38.944: INFO: namespace: e2e-tests-container-lifecycle-hook-zjqnk, resource: bindings, ignored listing per whitelist
Jan  4 13:39:38.998: INFO: namespace e2e-tests-container-lifecycle-hook-zjqnk deletion completed in 24.335870367s

• [SLOW TEST:61.215 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
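
The preStop HTTP hook exercised here is declared under the container's lifecycle field; on pod deletion the kubelet issues the HTTP GET before stopping the container, and the test confirms the handler pod created earlier actually received the request. A minimal sketch, with the pod name from the log but the hook pointed at the container itself rather than the suite's separate handler pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    ports:
    - containerPort: 80
    lifecycle:
      preStop:
        httpGet:
          path: /                          # the e2e instead targets its hook-handler pod
          port: 80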
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:39:38.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  4 13:39:50.180: INFO: Successfully updated pod "labelsupdatea7b95588-2ef7-11ea-9996-0242ac110006"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:39:52.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9jzc" for this suite.
Jan  4 13:40:16.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:40:16.636: INFO: namespace: e2e-tests-projected-b9jzc, resource: bindings, ignored listing per whitelist
Jan  4 13:40:16.676: INFO: namespace e2e-tests-projected-b9jzc deletion completed in 24.295941015s

• [SLOW TEST:37.677 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
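The spec above mounts pod labels through a projected downwardAPI volume, updates the labels, and waits for the mounted file to change. A sketch of the volume involved, with an assumed volume name and file path; the kubelet rewrites the "labels" file after the update, which is what the test polls for.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// labelsDownwardAPIVolume exposes metadata.labels as a file via a projected
// downwardAPI source; label updates are reflected in the mounted file.
func labelsDownwardAPIVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "labels",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.labels",
							},
						}},
					},
				}},
			},
		},
	}
}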
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:40:16.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-be097333-2ef7-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 13:40:16.950: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-w2qn8" to be "success or failure"
Jan  4 13:40:16.967: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.508356ms
Jan  4 13:40:19.469: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519310872s
Jan  4 13:40:21.484: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534071301s
Jan  4 13:40:23.497: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547333351s
Jan  4 13:40:25.509: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559012554s
Jan  4 13:40:27.522: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.571959007s
STEP: Saw pod success
Jan  4 13:40:27.522: INFO: Pod "pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:40:27.529: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 13:40:28.020: INFO: Waiting for pod pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006 to disappear
Jan  4 13:40:28.293: INFO: Pod pod-projected-configmaps-be0a59e0-2ef7-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:40:28.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w2qn8" for this suite.
Jan  4 13:40:36.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:40:37.030: INFO: namespace: e2e-tests-projected-w2qn8, resource: bindings, ignored listing per whitelist
Jan  4 13:40:37.056: INFO: namespace e2e-tests-projected-w2qn8 deletion completed in 8.711528725s

• [SLOW TEST:20.380 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
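"With mappings" in the spec above refers to remapping a configMap key to a different file path inside the projected volume, and the test pod then reads the file back. A sketch of that volume; the key and target path are assumed values, not taken from this run.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// mappedConfigMapVolume projects one configMap key to a remapped file path.
func mappedConfigMapVolume(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // assumed key in the configMap
							Path: "path/to/data-2", // remapped file path in the volume
						}},
					},
				}},
			},
		},
	}
}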
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:40:37.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:40:37.702: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  4 13:40:37.711: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-q7cvl/daemonsets","resourceVersion":"17148470"},"items":null}

Jan  4 13:40:37.718: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-q7cvl/pods","resourceVersion":"17148470"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:40:37.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-q7cvl" for this suite.
Jan  4 13:40:43.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:40:44.026: INFO: namespace: e2e-tests-daemonsets-q7cvl, resource: bindings, ignored listing per whitelist
Jan  4 13:40:44.184: INFO: namespace e2e-tests-daemonsets-q7cvl deletion completed in 6.430313234s

S [SKIPPING] [7.127 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  4 13:40:37.702: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:40:44.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-ce75dce0-2ef7-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 13:40:44.416: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-9j8bn" to be "success or failure"
Jan  4 13:40:44.424: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279279ms
Jan  4 13:40:46.475: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058936113s
Jan  4 13:40:48.526: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109971445s
Jan  4 13:40:50.562: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145217563s
Jan  4 13:40:53.037: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.62085131s
Jan  4 13:40:55.723: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.306366333s
Jan  4 13:40:57.749: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.333005635s
Jan  4 13:41:00.046: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.630130933s
Jan  4 13:41:02.121: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.704848457s
STEP: Saw pod success
Jan  4 13:41:02.121: INFO: Pod "pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:41:02.150: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:41:02.695: INFO: Waiting for pod pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006 to disappear
Jan  4 13:41:02.718: INFO: Pod pod-projected-secrets-ce76e145-2ef7-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:41:02.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9j8bn" for this suite.
Jan  4 13:41:10.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:41:11.061: INFO: namespace: e2e-tests-projected-9j8bn, resource: bindings, ignored listing per whitelist
Jan  4 13:41:11.062: INFO: namespace e2e-tests-projected-9j8bn deletion completed in 8.330294724s

• [SLOW TEST:26.877 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
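The projected-secret variant above follows the same pattern as the projected configMap test: a secret key is remapped to a new path inside the projected volume and a test container reads it back. A sketch under the same assumptions (key and path names are illustrative).

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// mappedSecretVolume projects one secret key to a remapped file path.
func mappedSecretVolume(secretName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // assumed key in the secret
							Path: "new-path-data-1", // remapped file path
						}},
					},
				}},
			},
		},
	}
}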
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:41:11.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fx5gr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 13:41:11.711: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 13:41:58.591: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-fx5gr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 13:41:58.591: INFO: >>> kubeConfig: /root/.kube/config
I0104 13:41:58.679575       8 log.go:172] (0xc000957970) (0xc002673720) Create stream
I0104 13:41:58.679696       8 log.go:172] (0xc000957970) (0xc002673720) Stream added, broadcasting: 1
I0104 13:41:58.688561       8 log.go:172] (0xc000957970) Reply frame received for 1
I0104 13:41:58.688614       8 log.go:172] (0xc000957970) (0xc001423180) Create stream
I0104 13:41:58.688627       8 log.go:172] (0xc000957970) (0xc001423180) Stream added, broadcasting: 3
I0104 13:41:58.692363       8 log.go:172] (0xc000957970) Reply frame received for 3
I0104 13:41:58.692444       8 log.go:172] (0xc000957970) (0xc0029cc8c0) Create stream
I0104 13:41:58.692455       8 log.go:172] (0xc000957970) (0xc0029cc8c0) Stream added, broadcasting: 5
I0104 13:41:58.695249       8 log.go:172] (0xc000957970) Reply frame received for 5
I0104 13:41:58.825533       8 log.go:172] (0xc000957970) Data frame received for 3
I0104 13:41:58.825654       8 log.go:172] (0xc001423180) (3) Data frame handling
I0104 13:41:58.825694       8 log.go:172] (0xc001423180) (3) Data frame sent
I0104 13:41:59.031264       8 log.go:172] (0xc000957970) Data frame received for 1
I0104 13:41:59.031348       8 log.go:172] (0xc002673720) (1) Data frame handling
I0104 13:41:59.031379       8 log.go:172] (0xc002673720) (1) Data frame sent
I0104 13:41:59.031559       8 log.go:172] (0xc000957970) (0xc002673720) Stream removed, broadcasting: 1
I0104 13:41:59.032036       8 log.go:172] (0xc000957970) (0xc001423180) Stream removed, broadcasting: 3
I0104 13:41:59.033192       8 log.go:172] (0xc000957970) (0xc0029cc8c0) Stream removed, broadcasting: 5
I0104 13:41:59.033241       8 log.go:172] (0xc000957970) (0xc002673720) Stream removed, broadcasting: 1
I0104 13:41:59.033248       8 log.go:172] (0xc000957970) (0xc001423180) Stream removed, broadcasting: 3
I0104 13:41:59.033255       8 log.go:172] (0xc000957970) (0xc0029cc8c0) Stream removed, broadcasting: 5
Jan  4 13:41:59.033: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:41:59.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-fx5gr" for this suite.
Jan  4 13:42:35.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:42:36.056: INFO: namespace: e2e-tests-pod-network-test-fx5gr, resource: bindings, ignored listing per whitelist
Jan  4 13:42:36.982: INFO: namespace e2e-tests-pod-network-test-fx5gr deletion completed in 37.907708095s

• [SLOW TEST:85.919 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
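The ExecWithOptions entry above shows the check being made: from a host-exec pod, curl the netserver pod's /hostName endpoint and confirm the expected endpoint (netserver-0) answers. A plain-Go sketch of the same probe; the port is the one visible in the logged curl command, the IP is whatever the test pod was assigned.

package sketch

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// checkHostName hits the netserver pod's /hostName endpoint and returns the
// hostname it reports, mirroring the curl run by the exec'd host container.
func checkHostName(podIP string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}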
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:42:36.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-11c04ffa-2ef8-11ea-9996-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan  4 13:42:37.337: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-rkgbg" to be "success or failure"
Jan  4 13:42:37.478: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 141.003892ms
Jan  4 13:42:40.433: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09566756s
Jan  4 13:42:42.488: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.150083319s
Jan  4 13:42:44.552: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.214376203s
Jan  4 13:42:46.880: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.542983381s
Jan  4 13:42:49.354: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.016576458s
Jan  4 13:42:52.051: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.71309321s
Jan  4 13:42:54.076: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.73853143s
STEP: Saw pod success
Jan  4 13:42:54.076: INFO: Pod "pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:42:54.085: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 13:42:57.157: INFO: Waiting for pod pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006 to disappear
Jan  4 13:42:57.175: INFO: Pod pod-projected-configmaps-11c41a01-2ef8-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:42:57.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rkgbg" for this suite.
Jan  4 13:43:03.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:43:03.375: INFO: namespace: e2e-tests-projected-rkgbg, resource: bindings, ignored listing per whitelist
Jan  4 13:43:03.609: INFO: namespace e2e-tests-projected-rkgbg deletion completed in 6.415643219s

• [SLOW TEST:26.626 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
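The "as non-root" variant above consumes the same kind of projected configMap volume, but the test container is forced to run with a non-zero UID. A sketch of that pod spec; the UID, image, command and mount path are illustrative assumptions.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// nonRootPodSpec wraps a projected volume in a pod whose container runs as a
// non-root user and reads the mounted files once before exiting.
func nonRootPodSpec(vol corev1.Volume) corev1.PodSpec {
	uid := int64(1000) // any non-zero UID
	return corev1.PodSpec{
		Volumes: []corev1.Volume{vol},
		Containers: []corev1.Container{{
			Name:    "projected-configmap-volume-test",
			Image:   "busybox", // placeholder image
			Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/*"},
			SecurityContext: &corev1.SecurityContext{
				RunAsUser: &uid,
			},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      vol.Name,
				MountPath: "/etc/projected-configmap-volume",
			}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
}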
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:43:03.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  4 13:43:03.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  4 13:43:06.005: INFO: stderr: ""
Jan  4 13:43:06.005: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:43:06.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8rc4b" for this suite.
Jan  4 13:43:12.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:43:12.246: INFO: namespace: e2e-tests-kubectl-8rc4b, resource: bindings, ignored listing per whitelist
Jan  4 13:43:12.426: INFO: namespace e2e-tests-kubectl-8rc4b deletion completed in 6.411091137s

• [SLOW TEST:8.816 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
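The cluster-info spec above shells out to kubectl and asserts that the "Kubernetes master" service line is present in the output (the logged stdout includes ANSI colour codes around it). A small sketch of the same check via os/exec; the kubeconfig path is whatever the caller supplies.

package sketch

import (
	"os/exec"
	"strings"
)

// clusterInfoHasMaster runs `kubectl cluster-info` and checks for the
// "Kubernetes master" line, ignoring the colour escapes seen in the log.
func clusterInfoHasMaster(kubeconfig string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "cluster-info").CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "Kubernetes master"), nil
}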
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:43:12.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  4 13:43:12.670: INFO: Waiting up to 5m0s for pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006" in namespace "e2e-tests-containers-sztj7" to be "success or failure"
Jan  4 13:43:12.682: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.270456ms
Jan  4 13:43:15.390: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719741534s
Jan  4 13:43:17.438: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767357283s
Jan  4 13:43:19.459: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788868791s
Jan  4 13:43:22.565: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.894177601s
Jan  4 13:43:24.589: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.918747222s
Jan  4 13:43:26.739: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.068402092s
Jan  4 13:43:30.413: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.742446531s
Jan  4 13:43:32.439: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.768264843s
STEP: Saw pod success
Jan  4 13:43:32.439: INFO: Pod "client-containers-26d4e953-2ef8-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:43:32.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-26d4e953-2ef8-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:43:32.591: INFO: Waiting for pod client-containers-26d4e953-2ef8-11ea-9996-0242ac110006 to disappear
Jan  4 13:43:32.608: INFO: Pod client-containers-26d4e953-2ef8-11ea-9996-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:43:32.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-sztj7" for this suite.
Jan  4 13:43:40.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:43:40.810: INFO: namespace: e2e-tests-containers-sztj7, resource: bindings, ignored listing per whitelist
Jan  4 13:43:40.943: INFO: namespace e2e-tests-containers-sztj7 deletion completed in 8.306227335s

• [SLOW TEST:28.516 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
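The "override all" pod created above exercises the two container fields that replace an image's built-in entrypoint: Command overrides ENTRYPOINT and Args overrides CMD. A sketch of such a container; the image and strings are placeholders rather than the suite's own test image.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// overrideContainer replaces both the image's default command and arguments.
func overrideContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "busybox",                          // placeholder image
		Command: []string{"/bin/echo"},              // overrides the image ENTRYPOINT
		Args:    []string{"override", "arguments"},  // overrides the image CMD
	}
}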
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:43:40.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  4 13:43:41.666: INFO: Waiting up to 5m0s for pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj" in namespace "e2e-tests-svcaccounts-776wq" to be "success or failure"
Jan  4 13:43:41.681: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.181346ms
Jan  4 13:43:43.693: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025976422s
Jan  4 13:43:45.755: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088029518s
Jan  4 13:43:48.158: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.491327484s
Jan  4 13:43:50.175: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507973684s
Jan  4 13:43:52.208: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541419132s
Jan  4 13:43:54.796: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.12984316s
Jan  4 13:43:56.831: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.164791526s
Jan  4 13:43:58.879: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.212317417s
STEP: Saw pod success
Jan  4 13:43:58.879: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj" satisfied condition "success or failure"
Jan  4 13:43:58.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj container token-test: 
STEP: delete the pod
Jan  4 13:43:59.083: INFO: Waiting for pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj to disappear
Jan  4 13:43:59.096: INFO: Pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-qnnqj no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  4 13:43:59.130: INFO: Waiting up to 5m0s for pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn" in namespace "e2e-tests-svcaccounts-776wq" to be "success or failure"
Jan  4 13:43:59.382: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 251.350189ms
Jan  4 13:44:02.035: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.90480042s
Jan  4 13:44:04.079: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.948388179s
Jan  4 13:44:06.338: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.207833659s
Jan  4 13:44:08.355: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.224785409s
Jan  4 13:44:10.629: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.498357676s
Jan  4 13:44:12.728: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.597831802s
Jan  4 13:44:14.747: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Pending", Reason="", readiness=false. Elapsed: 15.616653847s
Jan  4 13:44:16.761: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.630605721s
STEP: Saw pod success
Jan  4 13:44:16.761: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn" satisfied condition "success or failure"
Jan  4 13:44:16.779: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn container root-ca-test: 
STEP: delete the pod
Jan  4 13:44:17.119: INFO: Waiting for pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn to disappear
Jan  4 13:44:17.145: INFO: Pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-8srmn no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  4 13:44:17.280: INFO: Waiting up to 5m0s for pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz" in namespace "e2e-tests-svcaccounts-776wq" to be "success or failure"
Jan  4 13:44:17.298: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.41429ms
Jan  4 13:44:19.438: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157596088s
Jan  4 13:44:21.451: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170770562s
Jan  4 13:44:23.462: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1818604s
Jan  4 13:44:25.780: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499514215s
Jan  4 13:44:27.821: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541236007s
Jan  4 13:44:30.865: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.584311125s
Jan  4 13:44:33.007: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 15.726946169s
Jan  4 13:44:35.022: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.742133552s
Jan  4 13:44:37.727: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.446811633s
Jan  4 13:44:39.760: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.479679166s
Jan  4 13:44:41.791: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Pending", Reason="", readiness=false. Elapsed: 24.510567185s
Jan  4 13:44:43.886: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.605411378s
STEP: Saw pod success
Jan  4 13:44:43.886: INFO: Pod "pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz" satisfied condition "success or failure"
Jan  4 13:44:43.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz container namespace-test: 
STEP: delete the pod
Jan  4 13:44:46.454: INFO: Waiting for pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz to disappear
Jan  4 13:44:46.472: INFO: Pod pod-service-account-381cea10-2ef8-11ea-9996-0242ac110006-bgqrz no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:44:46.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-776wq" for this suite.
Jan  4 13:44:54.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:44:55.056: INFO: namespace: e2e-tests-svcaccounts-776wq, resource: bindings, ignored listing per whitelist
Jan  4 13:44:55.257: INFO: namespace e2e-tests-svcaccounts-776wq deletion completed in 8.772305672s

• [SLOW TEST:74.314 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
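The spec above gets the namespace's auto-created service account token and runs three pods (the token-test, root-ca-test and namespace-test containers in the log) that read the mounted credential files back. A sketch of the token-reading container; the image is a placeholder, and the mount path is the fixed location where the kubelet projects the default service account credentials.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// tokenCheckContainer reads the automatically mounted service account token.
// The companion containers in the log read ca.crt and namespace from the same
// directory.
func tokenCheckContainer() corev1.Container {
	return corev1.Container{
		Name:    "token-test",
		Image:   "busybox", // placeholder image
		Command: []string{"sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token"},
	}
}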
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:44:55.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:44:55.833: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  4 13:44:56.074: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  4 13:45:01.096: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  4 13:45:13.118: INFO: Creating deployment "test-rolling-update-deployment"
Jan  4 13:45:13.146: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  4 13:45:13.160: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  4 13:45:17.710: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  4 13:45:17.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:45:19.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:45:21.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:45:23.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:45:26.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:45:27.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713742313, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 13:45:29.775: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  4 13:45:29.811: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-qs7cp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qs7cp/deployments/test-rolling-update-deployment,UID:6ea3ee4b-2ef8-11ea-a994-fa163e34d433,ResourceVersion:17149051,Generation:1,CreationTimestamp:2020-01-04 13:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 13:45:13 +0000 UTC 2020-01-04 13:45:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 13:45:29 +0000 UTC 2020-01-04 13:45:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  4 13:45:29.818: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-qs7cp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qs7cp/replicasets/test-rolling-update-deployment-75db98fb4c,UID:6eadac43-2ef8-11ea-a994-fa163e34d433,ResourceVersion:17149042,Generation:1,CreationTimestamp:2020-01-04 13:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6ea3ee4b-2ef8-11ea-a994-fa163e34d433 0xc00182a277 0xc00182a278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  4 13:45:29.818: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  4 13:45:29.818: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-qs7cp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qs7cp/replicasets/test-rolling-update-controller,UID:6455d6e6-2ef8-11ea-a994-fa163e34d433,ResourceVersion:17149050,Generation:2,CreationTimestamp:2020-01-04 13:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6ea3ee4b-2ef8-11ea-a994-fa163e34d433 0xc00182a19f 0xc00182a1b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 13:45:29.831: INFO: Pod "test-rolling-update-deployment-75db98fb4c-wk8fk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-wk8fk,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-qs7cp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qs7cp/pods/test-rolling-update-deployment-75db98fb4c-wk8fk,UID:6ec62957-2ef8-11ea-a994-fa163e34d433,ResourceVersion:17149041,Generation:0,CreationTimestamp:2020-01-04 13:45:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 6eadac43-2ef8-11ea-a994-fa163e34d433 0xc00182abc7 0xc00182abc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ggw2j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ggw2j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-ggw2j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00182ac30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00182ac50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:45:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:45:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:45:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:45:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-04 13:45:13 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 13:45:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3a036c15a54e5ab9626582c715d9d5dfcee607e9efce23327e393e822a611c12}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:45:29.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-qs7cp" for this suite.
Jan  4 13:45:39.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:45:39.893: INFO: namespace: e2e-tests-deployment-qs7cp, resource: bindings, ignored listing per whitelist
Jan  4 13:45:40.690: INFO: namespace e2e-tests-deployment-qs7cp deletion completed in 10.848045216s

• [SLOW TEST:45.433 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
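Editor's note: the rolling update recorded above (old redis pods replaced by pods under the new ReplicaSet hash 75db98fb4c) can be reproduced outside the e2e framework with plain kubectl. A minimal sketch, assuming a deployment and namespace named like the ones in this run (both placeholders here, since the test namespace is deleted at the end of the spec):

# Trigger a rolling update by changing the container image, then watch it complete:
kubectl --kubeconfig=/root/.kube/config set image deployment/test-rolling-update-deployment \
    redis=gcr.io/kubernetes-e2e-test-images/redis:1.0 --namespace=<namespace>
kubectl --kubeconfig=/root/.kube/config rollout status deployment/test-rolling-update-deployment --namespace=<namespace>
# The remaining pods should carry the new pod-template-hash label, as in the pod dump above:
kubectl --kubeconfig=/root/.kube/config get rs,pods -l name=sample-pod --namespace=<namespace>
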
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:45:40.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0104 13:45:49.319713       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 13:45:49.319: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:45:49.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-r4trb" for this suite.
Jan  4 13:45:57.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:45:58.041: INFO: namespace: e2e-tests-gc-r4trb, resource: bindings, ignored listing per whitelist
Jan  4 13:45:58.082: INFO: namespace e2e-tests-gc-r4trb deletion completed in 8.693620038s

• [SLOW TEST:17.391 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
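Editor's note: the non-orphaning case above corresponds to a cascading delete. With kubectl v1.13, --cascade is a boolean that defaults to true, so deleting the deployment lets the garbage collector remove the owned ReplicaSet and its pods. A minimal sketch with illustrative names:

# Delete the deployment and let its dependents be garbage collected:
kubectl --kubeconfig=/root/.kube/config delete deployment <deployment-name> --namespace=<namespace> --cascade=true
# Shortly afterwards no ReplicaSets or pods owned by it should remain:
kubectl --kubeconfig=/root/.kube/config get rs,pods --namespace=<namespace>

The intermediate "expected 0 rs, got 1 rs" lines in the log are the test polling before collection has finished.
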
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:45:58.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  4 13:46:11.595: INFO: 10 pods remaining
Jan  4 13:46:11.595: INFO: 10 pods have nil DeletionTimestamp
Jan  4 13:46:11.595: INFO: 
Jan  4 13:46:12.410: INFO: 10 pods remaining
Jan  4 13:46:12.410: INFO: 10 pods have nil DeletionTimestamp
Jan  4 13:46:12.410: INFO: 
Jan  4 13:46:14.919: INFO: 10 pods remaining
Jan  4 13:46:14.919: INFO: 8 pods have nil DeletionTimestamp
Jan  4 13:46:14.919: INFO: 
Jan  4 13:46:20.036: INFO: 8 pods remaining
Jan  4 13:46:20.036: INFO: 0 pods have nil DeletionTimestamp
Jan  4 13:46:20.036: INFO: 
Jan  4 13:46:21.427: INFO: 0 pods remaining
Jan  4 13:46:21.427: INFO: 0 pods have nil DeletionTimestamp
Jan  4 13:46:21.427: INFO: 
STEP: Gathering metrics
W0104 13:46:22.983895       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 13:46:22.984: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:46:22.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-n94np" for this suite.
Jan  4 13:46:45.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:46:45.886: INFO: namespace: e2e-tests-gc-n94np, resource: bindings, ignored listing per whitelist
Jan  4 13:46:46.531: INFO: namespace e2e-tests-gc-n94np deletion completed in 23.52290288s

• [SLOW TEST:48.450 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
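Editor's note: keeping the rc around until its pods are gone is foreground deletion (propagationPolicy: Foreground). kubectl v1.13 has no dedicated flag for this, so one way to reproduce it is to send the DeleteOptions directly to the API server. A sketch via kubectl proxy, with illustrative namespace and rc names:

kubectl --kubeconfig=/root/.kube/config proxy --port=8001 &
# (give the proxy a moment to start before issuing the request)
curl -X DELETE "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/replicationcontrollers/<rc-name>" \
     -H 'Content-Type: application/json' \
     -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# The rc is marked with a foregroundDeletion finalizer and only disappears once its pods have
# been deleted, which is the "N pods remaining" countdown visible in the log above.
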
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:46:46.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  4 13:46:48.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:46:48.997: INFO: stderr: ""
Jan  4 13:46:48.997: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:46:48.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:46:49.265: INFO: stderr: ""
Jan  4 13:46:49.265: INFO: stdout: "update-demo-nautilus-779hp update-demo-nautilus-rzsw7 "
Jan  4 13:46:49.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-779hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:46:49.522: INFO: stderr: ""
Jan  4 13:46:49.522: INFO: stdout: ""
Jan  4 13:46:49.522: INFO: update-demo-nautilus-779hp is created but not running
Jan  4 13:46:54.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:46:54.924: INFO: stderr: ""
Jan  4 13:46:54.924: INFO: stdout: "update-demo-nautilus-779hp update-demo-nautilus-rzsw7 "
Jan  4 13:46:54.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-779hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:46:55.269: INFO: stderr: ""
Jan  4 13:46:55.269: INFO: stdout: ""
Jan  4 13:46:55.269: INFO: update-demo-nautilus-779hp is created but not running
Jan  4 13:47:00.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:00.416: INFO: stderr: ""
Jan  4 13:47:00.416: INFO: stdout: "update-demo-nautilus-779hp update-demo-nautilus-rzsw7 "
Jan  4 13:47:00.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-779hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:00.535: INFO: stderr: ""
Jan  4 13:47:00.535: INFO: stdout: ""
Jan  4 13:47:00.535: INFO: update-demo-nautilus-779hp is created but not running
Jan  4 13:47:05.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:07.787: INFO: stderr: ""
Jan  4 13:47:07.787: INFO: stdout: "update-demo-nautilus-779hp update-demo-nautilus-rzsw7 "
Jan  4 13:47:07.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-779hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:07.952: INFO: stderr: ""
Jan  4 13:47:07.952: INFO: stdout: ""
Jan  4 13:47:07.952: INFO: update-demo-nautilus-779hp is created but not running
Jan  4 13:47:12.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:13.201: INFO: stderr: ""
Jan  4 13:47:13.201: INFO: stdout: "update-demo-nautilus-779hp update-demo-nautilus-rzsw7 "
Jan  4 13:47:13.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-779hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:13.337: INFO: stderr: ""
Jan  4 13:47:13.337: INFO: stdout: "true"
Jan  4 13:47:13.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-779hp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:13.489: INFO: stderr: ""
Jan  4 13:47:13.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:47:13.489: INFO: validating pod update-demo-nautilus-779hp
Jan  4 13:47:13.550: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:47:13.550: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:47:13.550: INFO: update-demo-nautilus-779hp is verified up and running
Jan  4 13:47:13.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzsw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:13.662: INFO: stderr: ""
Jan  4 13:47:13.662: INFO: stdout: "true"
Jan  4 13:47:13.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzsw7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:47:13.801: INFO: stderr: ""
Jan  4 13:47:13.802: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:47:13.802: INFO: validating pod update-demo-nautilus-rzsw7
Jan  4 13:47:13.829: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:47:13.829: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:47:13.829: INFO: update-demo-nautilus-rzsw7 is verified up and running
STEP: rolling-update to new replication controller
Jan  4 13:47:13.844: INFO: scanned /root for discovery docs: 
Jan  4 13:47:13.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:10.438: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  4 13:48:10.438: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:48:10.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:11.141: INFO: stderr: ""
Jan  4 13:48:11.141: INFO: stdout: "update-demo-kitten-7k64k update-demo-kitten-dtj9k update-demo-nautilus-779hp "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  4 13:48:16.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:16.368: INFO: stderr: ""
Jan  4 13:48:16.368: INFO: stdout: "update-demo-kitten-7k64k update-demo-kitten-dtj9k "
Jan  4 13:48:16.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7k64k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:16.460: INFO: stderr: ""
Jan  4 13:48:16.460: INFO: stdout: "true"
Jan  4 13:48:16.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7k64k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:16.601: INFO: stderr: ""
Jan  4 13:48:16.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  4 13:48:16.601: INFO: validating pod update-demo-kitten-7k64k
Jan  4 13:48:16.670: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  4 13:48:16.670: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  4 13:48:16.670: INFO: update-demo-kitten-7k64k is verified up and running
Jan  4 13:48:16.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dtj9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:16.770: INFO: stderr: ""
Jan  4 13:48:16.770: INFO: stdout: "true"
Jan  4 13:48:16.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dtj9k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2n2ph'
Jan  4 13:48:16.976: INFO: stderr: ""
Jan  4 13:48:16.976: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  4 13:48:16.976: INFO: validating pod update-demo-kitten-dtj9k
Jan  4 13:48:17.011: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  4 13:48:17.011: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  4 13:48:17.011: INFO: update-demo-kitten-dtj9k is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:48:17.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2n2ph" for this suite.
Jan  4 13:48:51.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:48:51.190: INFO: namespace: e2e-tests-kubectl-2n2ph, resource: bindings, ignored listing per whitelist
Jan  4 13:48:51.538: INFO: namespace e2e-tests-kubectl-2n2ph deletion completed in 34.505036805s

• [SLOW TEST:125.005 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
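Editor's note: the deprecated rolling-update invoked above (nautilus -> kitten) can also be driven by image rather than by piping a new rc manifest. A sketch using the controller and image names from this run (namespace is a placeholder); on current clusters the equivalent would be a Deployment plus kubectl rollout:

kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus update-demo-kitten \
    --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0 --update-period=1s \
    --namespace=<namespace>
# As in the log, kubectl scales the new controller up and the old one down one pod at a time
# before deleting the old controller.
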
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:48:51.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f0fe0a3c-2ef8-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 13:48:51.933: INFO: Waiting up to 5m0s for pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006" in namespace "e2e-tests-secrets-kbwsn" to be "success or failure"
Jan  4 13:48:51.945: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.661836ms
Jan  4 13:48:54.491: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.557859491s
Jan  4 13:48:56.633: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.699614132s
Jan  4 13:48:59.297: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.363770273s
Jan  4 13:49:01.681: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.748554668s
Jan  4 13:49:03.703: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.770556857s
Jan  4 13:49:06.764: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 14.830658463s
Jan  4 13:49:08.842: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.908899471s
STEP: Saw pod success
Jan  4 13:49:08.842: INFO: Pod "pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:49:08.883: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan  4 13:49:09.330: INFO: Waiting for pod pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006 to disappear
Jan  4 13:49:09.511: INFO: Pod pod-secrets-f10b3870-2ef8-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:49:09.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kbwsn" for this suite.
Jan  4 13:49:17.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:49:17.809: INFO: namespace: e2e-tests-secrets-kbwsn, resource: bindings, ignored listing per whitelist
Jan  4 13:49:17.816: INFO: namespace e2e-tests-secrets-kbwsn deletion completed in 8.286579816s

• [SLOW TEST:26.277 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
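Editor's note: the pod behind this test mounts one secret through two separate volumes. A rough sketch of such a fixture, assuming a secret named secret-test-example with a key data-1 already exists in the namespace (names, image, and paths are assumptions, not the exact e2e manifest):

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=<namespace> <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-example
  - name: secret-volume-2
    secret:
      secretName: secret-test-example
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
EOF

Such a pod runs to completion ("Succeeded" above) because it only prints the mounted files and exits.
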
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:49:17.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-00d8d9ab-2ef9-11ea-9996-0242ac110006
STEP: Creating a pod to test consume secrets
Jan  4 13:49:18.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-5xsqs" to be "success or failure"
Jan  4 13:49:18.486: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.230711ms
Jan  4 13:49:20.894: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422428638s
Jan  4 13:49:22.912: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440823667s
Jan  4 13:49:24.986: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.514707758s
Jan  4 13:49:28.115: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.643597171s
Jan  4 13:49:30.765: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.293412625s
Jan  4 13:49:32.853: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.381592552s
Jan  4 13:49:34.878: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.407114251s
Jan  4 13:49:37.188: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.716510445s
Jan  4 13:49:39.213: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.742228821s
STEP: Saw pod success
Jan  4 13:49:39.214: INFO: Pod "pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:49:39.221: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 13:49:39.872: INFO: Waiting for pod pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:49:39.891: INFO: Pod pod-projected-secrets-00d9ebe1-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:49:39.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5xsqs" for this suite.
Jan  4 13:49:48.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:49:48.099: INFO: namespace: e2e-tests-projected-5xsqs, resource: bindings, ignored listing per whitelist
Jan  4 13:49:48.221: INFO: namespace e2e-tests-projected-5xsqs deletion completed in 8.312159384s

• [SLOW TEST:30.405 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
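Editor's note: projected volumes take a defaultMode that applies to every projected file unless overridden per item. A rough sketch of a pod like the one above (names, mode, and key are assumptions; the referenced secret is assumed to exist with a key data-1):

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=<namespace> <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-test-example
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
EOF
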
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:49:48.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:49:48.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-bd7jc" to be "success or failure"
Jan  4 13:49:48.926: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 213.497675ms
Jan  4 13:49:50.997: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284827652s
Jan  4 13:49:53.036: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323227822s
Jan  4 13:49:55.093: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38098849s
Jan  4 13:49:59.716: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.003914316s
Jan  4 13:50:01.783: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.070438425s
Jan  4 13:50:03.841: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.128225589s
Jan  4 13:50:05.888: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.176162029s
Jan  4 13:50:09.305: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 20.592761223s
Jan  4 13:50:11.323: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.610946507s
STEP: Saw pod success
Jan  4 13:50:11.323: INFO: Pod "downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:50:11.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:50:13.613: INFO: Waiting for pod downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:50:16.785: INFO: Pod downwardapi-volume-12df4405-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:50:16.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bd7jc" for this suite.
Jan  4 13:50:25.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:50:25.816: INFO: namespace: e2e-tests-downward-api-bd7jc, resource: bindings, ignored listing per whitelist
Jan  4 13:50:25.836: INFO: namespace e2e-tests-downward-api-bd7jc deletion completed in 9.021112844s

• [SLOW TEST:37.613 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
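Editor's note: the downward API resourceFieldRef for limits.memory falls back to the node's allocatable memory when the container sets no memory limit, which is what this test asserts. A rough sketch of such a pod (image, names, and mount path are assumptions):

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=<namespace> <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits.memory here, so the projected value is the node allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
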
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:50:25.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  4 13:50:26.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mgnt5'
Jan  4 13:50:26.656: INFO: stderr: ""
Jan  4 13:50:26.656: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  4 13:50:28.931: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:28.931: INFO: Found 0 / 1
Jan  4 13:50:30.583: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:30.583: INFO: Found 0 / 1
Jan  4 13:50:30.929: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:30.929: INFO: Found 0 / 1
Jan  4 13:50:31.691: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:31.692: INFO: Found 0 / 1
Jan  4 13:50:32.684: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:32.684: INFO: Found 0 / 1
Jan  4 13:50:33.665: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:33.665: INFO: Found 0 / 1
Jan  4 13:50:35.712: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:35.712: INFO: Found 0 / 1
Jan  4 13:50:37.389: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:37.389: INFO: Found 0 / 1
Jan  4 13:50:37.908: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:37.909: INFO: Found 0 / 1
Jan  4 13:50:38.723: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:38.724: INFO: Found 0 / 1
Jan  4 13:50:39.695: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:39.695: INFO: Found 0 / 1
Jan  4 13:50:40.678: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:40.678: INFO: Found 0 / 1
Jan  4 13:50:41.933: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:41.934: INFO: Found 1 / 1
Jan  4 13:50:41.934: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  4 13:50:42.324: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:42.324: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  4 13:50:42.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-f4xl8 --namespace=e2e-tests-kubectl-mgnt5 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  4 13:50:42.983: INFO: stderr: ""
Jan  4 13:50:42.983: INFO: stdout: "pod/redis-master-f4xl8 patched\n"
STEP: checking annotations
Jan  4 13:50:43.053: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:50:43.054: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:50:43.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mgnt5" for this suite.
Jan  4 13:51:11.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:51:11.484: INFO: namespace: e2e-tests-kubectl-mgnt5, resource: bindings, ignored listing per whitelist
Jan  4 13:51:11.484: INFO: namespace e2e-tests-kubectl-mgnt5 deletion completed in 28.234381893s

• [SLOW TEST:45.647 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
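Editor's note: the annotation patch above is an ordinary strategic-merge patch. A sketch of the same operation plus a check that the annotation landed (pod name and namespace are placeholders; in the run above the pod was redis-master-f4xl8):

kubectl --kubeconfig=/root/.kube/config patch pod <pod-name> --namespace=<namespace> \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl --kubeconfig=/root/.kube/config get pod <pod-name> --namespace=<namespace> \
    -o jsonpath='{.metadata.annotations.x}'
# Expected output: y
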
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:51:11.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  4 13:51:11.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:12.635: INFO: stderr: ""
Jan  4 13:51:12.636: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:51:12.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:12.912: INFO: stderr: ""
Jan  4 13:51:12.912: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
Jan  4 13:51:12.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6db72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:13.205: INFO: stderr: ""
Jan  4 13:51:13.205: INFO: stdout: ""
Jan  4 13:51:13.205: INFO: update-demo-nautilus-6db72 is created but not running
Jan  4 13:51:18.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:18.641: INFO: stderr: ""
Jan  4 13:51:18.641: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
Jan  4 13:51:18.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6db72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:20.361: INFO: stderr: ""
Jan  4 13:51:20.361: INFO: stdout: ""
Jan  4 13:51:20.361: INFO: update-demo-nautilus-6db72 is created but not running
Jan  4 13:51:25.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:25.522: INFO: stderr: ""
Jan  4 13:51:25.522: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
Jan  4 13:51:25.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6db72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:26.636: INFO: stderr: ""
Jan  4 13:51:26.636: INFO: stdout: ""
Jan  4 13:51:26.636: INFO: update-demo-nautilus-6db72 is created but not running
Jan  4 13:51:31.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:33.000: INFO: stderr: ""
Jan  4 13:51:33.000: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
Jan  4 13:51:33.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6db72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:33.385: INFO: stderr: ""
Jan  4 13:51:33.385: INFO: stdout: ""
Jan  4 13:51:33.385: INFO: update-demo-nautilus-6db72 is created but not running
Jan  4 13:51:38.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:38.517: INFO: stderr: ""
Jan  4 13:51:38.517: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
Jan  4 13:51:38.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6db72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:38.633: INFO: stderr: ""
Jan  4 13:51:38.633: INFO: stdout: "true"
Jan  4 13:51:38.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6db72 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:38.753: INFO: stderr: ""
Jan  4 13:51:38.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:51:38.753: INFO: validating pod update-demo-nautilus-6db72
Jan  4 13:51:38.804: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:51:38.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:51:38.804: INFO: update-demo-nautilus-6db72 is verified up and running
Jan  4 13:51:38.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9vvz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:38.943: INFO: stderr: ""
Jan  4 13:51:38.944: INFO: stdout: "true"
Jan  4 13:51:38.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9vvz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:39.043: INFO: stderr: ""
Jan  4 13:51:39.043: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:51:39.043: INFO: validating pod update-demo-nautilus-v9vvz
Jan  4 13:51:39.079: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:51:39.079: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:51:39.079: INFO: update-demo-nautilus-v9vvz is verified up and running
STEP: scaling down the replication controller
Jan  4 13:51:39.110: INFO: scanned /root for discovery docs: 
Jan  4 13:51:39.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:40.961: INFO: stderr: ""
Jan  4 13:51:40.961: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:51:40.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:41.048: INFO: stderr: ""
Jan  4 13:51:41.048: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  4 13:51:46.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:46.192: INFO: stderr: ""
Jan  4 13:51:46.192: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  4 13:51:51.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:51.311: INFO: stderr: ""
Jan  4 13:51:51.312: INFO: stdout: "update-demo-nautilus-6db72 update-demo-nautilus-v9vvz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  4 13:51:56.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:56.535: INFO: stderr: ""
Jan  4 13:51:56.535: INFO: stdout: "update-demo-nautilus-v9vvz "
Jan  4 13:51:56.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9vvz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:56.659: INFO: stderr: ""
Jan  4 13:51:56.659: INFO: stdout: "true"
Jan  4 13:51:56.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9vvz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:56.841: INFO: stderr: ""
Jan  4 13:51:56.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:51:56.841: INFO: validating pod update-demo-nautilus-v9vvz
Jan  4 13:51:56.879: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:51:56.879: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:51:56.879: INFO: update-demo-nautilus-v9vvz is verified up and running
STEP: scaling up the replication controller
Jan  4 13:51:56.883: INFO: scanned /root for discovery docs: 
Jan  4 13:51:56.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:51:59.708: INFO: stderr: ""
Jan  4 13:51:59.708: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 13:51:59.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:00.237: INFO: stderr: ""
Jan  4 13:52:00.237: INFO: stdout: "update-demo-nautilus-m9zxc update-demo-nautilus-v9vvz "
Jan  4 13:52:00.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:00.649: INFO: stderr: ""
Jan  4 13:52:00.649: INFO: stdout: ""
Jan  4 13:52:00.649: INFO: update-demo-nautilus-m9zxc is created but not running
Jan  4 13:52:05.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:06.099: INFO: stderr: ""
Jan  4 13:52:06.099: INFO: stdout: "update-demo-nautilus-m9zxc update-demo-nautilus-v9vvz "
Jan  4 13:52:06.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:06.592: INFO: stderr: ""
Jan  4 13:52:06.592: INFO: stdout: ""
Jan  4 13:52:06.592: INFO: update-demo-nautilus-m9zxc is created but not running
Jan  4 13:52:11.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:11.789: INFO: stderr: ""
Jan  4 13:52:11.789: INFO: stdout: "update-demo-nautilus-m9zxc update-demo-nautilus-v9vvz "
Jan  4 13:52:11.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zxc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:12.005: INFO: stderr: ""
Jan  4 13:52:12.005: INFO: stdout: "true"
Jan  4 13:52:12.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9zxc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:12.140: INFO: stderr: ""
Jan  4 13:52:12.140: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:52:12.141: INFO: validating pod update-demo-nautilus-m9zxc
Jan  4 13:52:12.163: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:52:12.163: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:52:12.163: INFO: update-demo-nautilus-m9zxc is verified up and running
Jan  4 13:52:12.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9vvz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:12.395: INFO: stderr: ""
Jan  4 13:52:12.396: INFO: stdout: "true"
Jan  4 13:52:12.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9vvz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:12.510: INFO: stderr: ""
Jan  4 13:52:12.510: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 13:52:12.510: INFO: validating pod update-demo-nautilus-v9vvz
Jan  4 13:52:12.543: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 13:52:12.544: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 13:52:12.544: INFO: update-demo-nautilus-v9vvz is verified up and running
STEP: using delete to clean up resources
Jan  4 13:52:12.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:12.668: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 13:52:12.668: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  4 13:52:12.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-rpgzh'
Jan  4 13:52:12.889: INFO: stderr: "No resources found.\n"
Jan  4 13:52:12.889: INFO: stdout: ""
Jan  4 13:52:12.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-rpgzh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 13:52:14.735: INFO: stderr: ""
Jan  4 13:52:14.735: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:52:14.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rpgzh" for this suite.
Jan  4 13:52:40.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:52:40.831: INFO: namespace: e2e-tests-kubectl-rpgzh, resource: bindings, ignored listing per whitelist
Jan  4 13:52:40.850: INFO: namespace e2e-tests-kubectl-rpgzh deletion completed in 24.681080918s

• [SLOW TEST:89.366 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
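Editor's note: scaling a replication controller down to 1 and back up to 2, as exercised above, is two scale calls followed by a check of desired vs. ready replicas. A sketch reusing the rc name from the run (namespace is a placeholder):

kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=<namespace>
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=<namespace>
# Confirm spec vs. status once the new pod is running:
kubectl --kubeconfig=/root/.kube/config get rc update-demo-nautilus --namespace=<namespace> \
    -o jsonpath='{.spec.replicas} {.status.readyReplicas}'
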
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:52:40.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  4 13:52:41.747: INFO: Waiting up to 5m0s for pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-v6prx" to be "success or failure"
Jan  4 13:52:41.754: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548685ms
Jan  4 13:52:45.409: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661255512s
Jan  4 13:52:47.733: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.98553562s
Jan  4 13:52:49.749: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00189557s
Jan  4 13:52:51.912: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.164410197s
Jan  4 13:52:53.990: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.242429943s
Jan  4 13:52:56.348: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.60029821s
Jan  4 13:52:58.380: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.632554714s
STEP: Saw pod success
Jan  4 13:52:58.380: INFO: Pod "pod-79e1e49a-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:52:58.390: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-79e1e49a-2ef9-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:52:58.920: INFO: Waiting for pod pod-79e1e49a-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:52:58.936: INFO: Pod pod-79e1e49a-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:52:58.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-v6prx" for this suite.
Jan  4 13:53:06.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:53:07.142: INFO: namespace: e2e-tests-emptydir-v6prx, resource: bindings, ignored listing per whitelist
Jan  4 13:53:07.249: INFO: namespace e2e-tests-emptydir-v6prx deletion completed in 8.303408441s

• [SLOW TEST:26.398 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
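
The emptyDir case above reduces to a pod that runs as a non-root UID and writes a mode-0777 file on a default-medium emptyDir volume. A minimal hand-written sketch, with an assumed image, UID and paths:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root, as in the test title
  containers:
  - name: test-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node disk)
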
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:53:07.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:53:07.538: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-downward-api-swhg6" to be "success or failure"
Jan  4 13:53:07.573: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 34.33078ms
Jan  4 13:53:09.657: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118864497s
Jan  4 13:53:11.673: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134479037s
Jan  4 13:53:13.817: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278906251s
Jan  4 13:53:15.846: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307883853s
Jan  4 13:53:17.876: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 10.337184616s
Jan  4 13:53:19.895: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.356921501s
STEP: Saw pod success
Jan  4 13:53:19.896: INFO: Pod "downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:53:19.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:53:20.624: INFO: Waiting for pod downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:53:20.646: INFO: Pod downwardapi-volume-8966d443-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:53:20.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-swhg6" for this suite.
Jan  4 13:53:27.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:53:27.434: INFO: namespace: e2e-tests-downward-api-swhg6, resource: bindings, ignored listing per whitelist
Jan  4 13:53:27.536: INFO: namespace e2e-tests-downward-api-swhg6 deletion completed in 6.867482166s

• [SLOW TEST:20.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
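
The Downward API volume test above exposes the container's own memory request as a file inside the pod. A minimal sketch of that wiring; the pod name, image and request size are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi               # report the request in mebibytes
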
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:53:27.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  4 13:53:38.687: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-95e30868-2ef9-11ea-9996-0242ac110006,GenerateName:,Namespace:e2e-tests-events-f7fxk,SelfLink:/api/v1/namespaces/e2e-tests-events-f7fxk/pods/send-events-95e30868-2ef9-11ea-9996-0242ac110006,UID:95e5f630-2ef9-11ea-a994-fa163e34d433,ResourceVersion:17150075,Generation:0,CreationTimestamp:2020-01-04 13:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 464035655,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ssdxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ssdxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ssdxj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000db3fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017d2000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:53:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:53:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:53:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:53:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-04 13:53:28 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-04 13:53:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://d6d23520f46716ab391134207adc5a3d7db2370a3cfefcfdbef09a3389c643f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  4 13:53:40.719: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  4 13:53:42.757: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:53:42.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-f7fxk" for this suite.
Jan  4 13:54:22.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:54:22.904: INFO: namespace: e2e-tests-events-f7fxk, resource: bindings, ignored listing per whitelist
Jan  4 13:54:23.179: INFO: namespace e2e-tests-events-f7fxk deletion completed in 40.366368573s

• [SLOW TEST:55.643 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
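
The events check above can be repeated by hand: create a trivial pod (the name, label and image below follow the pod dump above but are only illustrative), then list the scheduler and kubelet events recorded against it; the reason-based field selectors are one way to separate the two.

# kubectl create -f send-events-demo.yaml
# kubectl get events --field-selector involvedObject.name=send-events-demo,reason=Scheduled   # from the scheduler
# kubectl get events --field-selector involvedObject.name=send-events-demo,reason=Started     # from the kubelet
apiVersion: v1
kind: Pod
metadata:
  name: send-events-demo
  labels:
    name: foo
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
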
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:54:23.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:54:23.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-nxmfc" to be "success or failure"
Jan  4 13:54:23.445: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139527ms
Jan  4 13:54:25.910: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473004755s
Jan  4 13:54:27.923: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486050849s
Jan  4 13:54:29.952: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.515429307s
Jan  4 13:54:31.983: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546380146s
Jan  4 13:54:34.002: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.56474268s
Jan  4 13:54:36.016: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.579375748s
STEP: Saw pod success
Jan  4 13:54:36.016: INFO: Pod "downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:54:36.023: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:54:36.867: INFO: Waiting for pod downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:54:36.881: INFO: Pod downwardapi-volume-b6a423b5-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:54:36.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nxmfc" for this suite.
Jan  4 13:54:45.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:54:45.623: INFO: namespace: e2e-tests-projected-nxmfc, resource: bindings, ignored listing per whitelist
Jan  4 13:54:45.823: INFO: namespace e2e-tests-projected-nxmfc deletion completed in 8.916718097s

• [SLOW TEST:22.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
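
The projected variant above differs from the plain Downward API volume only in that the downwardAPI items are declared as one source of a projected volume. An illustrative sketch, with assumed names and sizes:

apiVersion: v1
kind: Pod
metadata:
  name: projected-memory-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi           # report the request in mebibytes
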
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:54:45.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  4 13:54:45.970: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:55:05.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-s2bj5" for this suite.
Jan  4 13:55:14.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:55:14.121: INFO: namespace: e2e-tests-init-container-s2bj5, resource: bindings, ignored listing per whitelist
Jan  4 13:55:14.201: INFO: namespace e2e-tests-init-container-s2bj5 deletion completed in 8.53357111s

• [SLOW TEST:28.377 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
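
The init-container case above is a restartPolicy: Never pod whose init containers must all run to completion, in order, before the app container starts. A minimal illustrative sketch with assumed names and image:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "true"]
  - name: init2
    image: busybox:1.29
    command: ["sh", "-c", "true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["sh", "-c", "echo app container ran"]
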
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:55:14.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  4 13:55:14.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-projected-bpwbq" to be "success or failure"
Jan  4 13:55:14.775: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 22.316545ms
Jan  4 13:55:17.606: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.852589367s
Jan  4 13:55:19.972: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.219388327s
Jan  4 13:55:21.993: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.240359941s
Jan  4 13:55:24.522: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.769393348s
Jan  4 13:55:27.608: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.85545431s
Jan  4 13:55:33.149: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.396062697s
Jan  4 13:55:35.478: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 20.724626897s
Jan  4 13:55:37.506: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 22.753180641s
Jan  4 13:55:39.537: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 24.784209574s
Jan  4 13:55:42.302: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.54949814s
STEP: Saw pod success
Jan  4 13:55:42.303: INFO: Pod "downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:55:42.321: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006 container client-container: 
STEP: delete the pod
Jan  4 13:55:43.104: INFO: Waiting for pod downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:55:43.128: INFO: Pod downwardapi-volume-d538e533-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:55:43.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bpwbq" for this suite.
Jan  4 13:55:49.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:55:49.478: INFO: namespace: e2e-tests-projected-bpwbq, resource: bindings, ignored listing per whitelist
Jan  4 13:55:49.538: INFO: namespace e2e-tests-projected-bpwbq deletion completed in 6.397162572s

• [SLOW TEST:35.337 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
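
The test above relies on a Downward API rule: when resource: limits.memory is requested for a container that has no memory limit, the node's allocatable memory is reported instead. An illustrative sketch (no limits are set on purpose; names and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-default-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # falls back to node allocatable memory
              divisor: 1Mi
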
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:55:49.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  4 13:55:49.931: INFO: Waiting up to 5m0s for pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006" in namespace "e2e-tests-emptydir-qdn5s" to be "success or failure"
Jan  4 13:55:49.946: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.996351ms
Jan  4 13:55:51.971: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038880665s
Jan  4 13:55:53.990: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058536584s
Jan  4 13:55:56.194: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262500516s
Jan  4 13:55:58.205: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272989156s
Jan  4 13:56:00.352: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.420395972s
Jan  4 13:56:03.406: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.473946648s
STEP: Saw pod success
Jan  4 13:56:03.406: INFO: Pod "pod-ea2675fd-2ef9-11ea-9996-0242ac110006" satisfied condition "success or failure"
Jan  4 13:56:03.419: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ea2675fd-2ef9-11ea-9996-0242ac110006 container test-container: 
STEP: delete the pod
Jan  4 13:56:04.611: INFO: Waiting for pod pod-ea2675fd-2ef9-11ea-9996-0242ac110006 to disappear
Jan  4 13:56:04.734: INFO: Pod pod-ea2675fd-2ef9-11ea-9996-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:56:04.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qdn5s" for this suite.
Jan  4 13:56:10.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:56:10.938: INFO: namespace: e2e-tests-emptydir-qdn5s, resource: bindings, ignored listing per whitelist
Jan  4 13:56:10.938: INFO: namespace e2e-tests-emptydir-qdn5s deletion completed in 6.197078858s

• [SLOW TEST:21.399 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:56:10.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  4 13:56:11.157: INFO: namespace e2e-tests-kubectl-gkk72
Jan  4 13:56:11.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gkk72'
Jan  4 13:56:13.398: INFO: stderr: ""
Jan  4 13:56:13.398: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  4 13:56:15.546: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:15.546: INFO: Found 0 / 1
Jan  4 13:56:16.422: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:16.422: INFO: Found 0 / 1
Jan  4 13:56:17.415: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:17.415: INFO: Found 0 / 1
Jan  4 13:56:18.413: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:18.413: INFO: Found 0 / 1
Jan  4 13:56:19.469: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:19.469: INFO: Found 0 / 1
Jan  4 13:56:20.416: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:20.416: INFO: Found 0 / 1
Jan  4 13:56:21.439: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:21.439: INFO: Found 0 / 1
Jan  4 13:56:22.433: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:22.433: INFO: Found 0 / 1
Jan  4 13:56:23.418: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:23.419: INFO: Found 1 / 1
Jan  4 13:56:23.419: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  4 13:56:23.429: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 13:56:23.429: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  4 13:56:23.429: INFO: wait on redis-master startup in e2e-tests-kubectl-gkk72 
Jan  4 13:56:23.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7kvrc redis-master --namespace=e2e-tests-kubectl-gkk72'
Jan  4 13:56:23.674: INFO: stderr: ""
Jan  4 13:56:23.674: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Jan 13:56:21.865 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 13:56:21.866 # Server started, Redis version 3.2.12\n1:M 04 Jan 13:56:21.866 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 13:56:21.866 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  4 13:56:23.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-gkk72'
Jan  4 13:56:24.068: INFO: stderr: ""
Jan  4 13:56:24.068: INFO: stdout: "service/rm2 exposed\n"
Jan  4 13:56:24.087: INFO: Service rm2 in namespace e2e-tests-kubectl-gkk72 found.
STEP: exposing service
Jan  4 13:56:26.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-gkk72'
Jan  4 13:56:26.352: INFO: stderr: ""
Jan  4 13:56:26.352: INFO: stdout: "service/rm3 exposed\n"
Jan  4 13:56:26.367: INFO: Service rm3 in namespace e2e-tests-kubectl-gkk72 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:56:28.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gkk72" for this suite.
Jan  4 13:56:52.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:56:52.996: INFO: namespace: e2e-tests-kubectl-gkk72, resource: bindings, ignored listing per whitelist
Jan  4 13:56:53.024: INFO: namespace e2e-tests-kubectl-gkk72 deletion completed in 24.435588644s

• [SLOW TEST:42.085 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
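
The expose sequence above is plain kubectl; the two expose commands below mirror the ones in the log, while the replication controller manifest is only an illustrative stand-in for the suite's Redis fixture.

# kubectl create -f redis-master-rc.yaml
# kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
# kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # assumed image
        ports:
        - containerPort: 6379
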
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:56:53.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  4 13:57:12.088: INFO: Successfully updated pod "pod-update-10127500-2efa-11ea-9996-0242ac110006"
STEP: verifying the updated pod is in kubernetes
Jan  4 13:57:12.165: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:57:12.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cj9n5" for this suite.
Jan  4 13:57:40.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:57:40.759: INFO: namespace: e2e-tests-pods-cj9n5, resource: bindings, ignored listing per whitelist
Jan  4 13:57:41.054: INFO: namespace e2e-tests-pods-cj9n5 deletion completed in 28.822777829s

• [SLOW TEST:48.030 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
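
The pod-update test above only needs a mutable pod field; labels are a convenient one to exercise by hand. A rough equivalent, with an assumed pod name, label and image:

# kubectl create -f pod-update-demo.yaml
# kubectl label pod pod-update-demo time=updated --overwrite   # in-place update of a mutable field
# kubectl get pod pod-update-demo --show-labels
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-demo
  labels:
    name: pod-update-demo
    time: original
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine         # assumed image
    ports:
    - containerPort: 80
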
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:57:41.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:57:41.265: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  4 13:57:46.539: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  4 13:57:56.586: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  4 13:57:56.755: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-6dwxm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6dwxm/deployments/test-cleanup-deployment,UID:35b919eb-2efa-11ea-a994-fa163e34d433,ResourceVersion:17150570,Generation:1,CreationTimestamp:2020-01-04 13:57:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  4 13:57:56.767: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:57:56.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6dwxm" for this suite.
Jan  4 13:58:15.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:58:15.380: INFO: namespace: e2e-tests-deployment-6dwxm, resource: bindings, ignored listing per whitelist
Jan  4 13:58:15.399: INFO: namespace e2e-tests-deployment-6dwxm deletion completed in 18.588943363s

• [SLOW TEST:34.344 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
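
Old replica sets get deleted because the deployment dump above carries RevisionHistoryLimit:*0. A minimal deployment with that setting (replicas, labels and image are taken from the dump; everything else is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0            # keep no old ReplicaSets around after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
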
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:58:15.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  4 13:58:15.719: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  4 13:58:16.076: INFO: Number of nodes with available pods: 0
Jan  4 13:58:16.077: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  4 13:58:16.287: INFO: Number of nodes with available pods: 0
Jan  4 13:58:16.287: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:17.303: INFO: Number of nodes with available pods: 0
Jan  4 13:58:17.303: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:19.028: INFO: Number of nodes with available pods: 0
Jan  4 13:58:19.028: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:19.585: INFO: Number of nodes with available pods: 0
Jan  4 13:58:19.585: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:20.624: INFO: Number of nodes with available pods: 0
Jan  4 13:58:20.624: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:21.473: INFO: Number of nodes with available pods: 0
Jan  4 13:58:21.473: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:22.325: INFO: Number of nodes with available pods: 0
Jan  4 13:58:22.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:23.310: INFO: Number of nodes with available pods: 0
Jan  4 13:58:23.310: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:25.773: INFO: Number of nodes with available pods: 0
Jan  4 13:58:25.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:28.307: INFO: Number of nodes with available pods: 0
Jan  4 13:58:28.307: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:29.418: INFO: Number of nodes with available pods: 0
Jan  4 13:58:29.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:31.869: INFO: Number of nodes with available pods: 0
Jan  4 13:58:31.869: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:32.339: INFO: Number of nodes with available pods: 0
Jan  4 13:58:32.339: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:33.309: INFO: Number of nodes with available pods: 0
Jan  4 13:58:33.309: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:34.325: INFO: Number of nodes with available pods: 0
Jan  4 13:58:34.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:35.305: INFO: Number of nodes with available pods: 1
Jan  4 13:58:35.305: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  4 13:58:35.376: INFO: Number of nodes with available pods: 1
Jan  4 13:58:35.376: INFO: Number of running nodes: 0, number of available pods: 1
Jan  4 13:58:36.400: INFO: Number of nodes with available pods: 0
Jan  4 13:58:36.400: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  4 13:58:36.733: INFO: Number of nodes with available pods: 0
Jan  4 13:58:36.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:37.786: INFO: Number of nodes with available pods: 0
Jan  4 13:58:37.786: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:39.407: INFO: Number of nodes with available pods: 0
Jan  4 13:58:39.407: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:40.292: INFO: Number of nodes with available pods: 0
Jan  4 13:58:40.292: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:40.796: INFO: Number of nodes with available pods: 0
Jan  4 13:58:40.797: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:41.746: INFO: Number of nodes with available pods: 0
Jan  4 13:58:41.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:43.311: INFO: Number of nodes with available pods: 0
Jan  4 13:58:43.311: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:44.181: INFO: Number of nodes with available pods: 0
Jan  4 13:58:44.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:44.778: INFO: Number of nodes with available pods: 0
Jan  4 13:58:44.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:45.746: INFO: Number of nodes with available pods: 0
Jan  4 13:58:45.746: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:46.776: INFO: Number of nodes with available pods: 0
Jan  4 13:58:46.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:47.984: INFO: Number of nodes with available pods: 0
Jan  4 13:58:47.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:48.743: INFO: Number of nodes with available pods: 0
Jan  4 13:58:48.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:50.099: INFO: Number of nodes with available pods: 0
Jan  4 13:58:50.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:50.756: INFO: Number of nodes with available pods: 0
Jan  4 13:58:50.756: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:52.788: INFO: Number of nodes with available pods: 0
Jan  4 13:58:52.788: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:53.780: INFO: Number of nodes with available pods: 0
Jan  4 13:58:53.780: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:54.749: INFO: Number of nodes with available pods: 0
Jan  4 13:58:54.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:55.803: INFO: Number of nodes with available pods: 0
Jan  4 13:58:55.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:58.886: INFO: Number of nodes with available pods: 0
Jan  4 13:58:58.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:58:59.753: INFO: Number of nodes with available pods: 0
Jan  4 13:58:59.753: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:59:01.529: INFO: Number of nodes with available pods: 0
Jan  4 13:59:01.529: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:59:01.763: INFO: Number of nodes with available pods: 0
Jan  4 13:59:01.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:59:02.754: INFO: Number of nodes with available pods: 0
Jan  4 13:59:02.755: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:59:03.759: INFO: Number of nodes with available pods: 0
Jan  4 13:59:03.759: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  4 13:59:04.751: INFO: Number of nodes with available pods: 1
Jan  4 13:59:04.751: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xgwm6, will wait for the garbage collector to delete the pods
Jan  4 13:59:04.875: INFO: Deleting DaemonSet.extensions daemon-set took: 47.500795ms
Jan  4 13:59:05.176: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.721036ms
Jan  4 13:59:13.581: INFO: Number of nodes with available pods: 0
Jan  4 13:59:13.581: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 13:59:13.594: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xgwm6/daemonsets","resourceVersion":"17150758"},"items":null}

Jan  4 13:59:13.647: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xgwm6/pods","resourceVersion":"17150758"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 13:59:13.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xgwm6" for this suite.
Jan  4 13:59:21.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 13:59:21.864: INFO: namespace: e2e-tests-daemonsets-xgwm6, resource: bindings, ignored listing per whitelist
Jan  4 13:59:21.973: INFO: namespace e2e-tests-daemonsets-xgwm6 deletion completed in 8.263625287s

• [SLOW TEST:66.574 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
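
The daemon-set flow above hinges on a node selector: daemon pods run only on nodes whose labels match, so relabelling the node schedules or evicts them, and the update strategy can be switched to RollingUpdate. A scaled-down sketch; the label key/value, names and image are assumptions:

# kubectl label node <node-name> color=blue                   # daemon pod gets scheduled
# kubectl label node <node-name> color=green --overwrite      # daemon pod is evicted until the selector is changed too
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
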
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 13:59:21.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-t84bf
Jan  4 13:59:36.572: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-t84bf
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 13:59:36.580: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 14:03:36.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-t84bf" for this suite.
Jan  4 14:03:48.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:03:48.972: INFO: namespace: e2e-tests-container-probe-t84bf, resource: bindings, ignored listing per whitelist
Jan  4 14:03:48.983: INFO: namespace e2e-tests-container-probe-t84bf deletion completed in 11.966958418s

• [SLOW TEST:267.009 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
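
The probe above is the classic exec liveness check: as long as /tmp/health exists, cat /tmp/health succeeds and the restart count stays at 0. An illustrative sketch in which the container creates the file and never removes it (image, timings and the container command are assumptions; only the probe command comes from the test title):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
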
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 14:03:48.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  4 14:03:53.301: INFO: Pod name wrapped-volume-race-0a3ab480-2efb-11ea-9996-0242ac110006: Found 0 pods out of 5
Jan  4 14:03:58.333: INFO: Pod name wrapped-volume-race-0a3ab480-2efb-11ea-9996-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0a3ab480-2efb-11ea-9996-0242ac110006 in namespace e2e-tests-emptydir-wrapper-frb4j, will wait for the garbage collector to delete the pods
Jan  4 14:05:51.901: INFO: Deleting ReplicationController wrapped-volume-race-0a3ab480-2efb-11ea-9996-0242ac110006 took: 163.999412ms
Jan  4 14:05:52.402: INFO: Terminating ReplicationController wrapped-volume-race-0a3ab480-2efb-11ea-9996-0242ac110006 pods took: 500.666354ms
STEP: Creating RC which spawns configmap-volume pods
Jan  4 14:06:43.600: INFO: Pod name wrapped-volume-race-6fb512d2-2efb-11ea-9996-0242ac110006: Found 0 pods out of 5
Jan  4 14:06:48.669: INFO: Pod name wrapped-volume-race-6fb512d2-2efb-11ea-9996-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6fb512d2-2efb-11ea-9996-0242ac110006 in namespace e2e-tests-emptydir-wrapper-frb4j, will wait for the garbage collector to delete the pods
Jan  4 14:08:55.155: INFO: Deleting ReplicationController wrapped-volume-race-6fb512d2-2efb-11ea-9996-0242ac110006 took: 151.767314ms
Jan  4 14:08:55.656: INFO: Terminating ReplicationController wrapped-volume-race-6fb512d2-2efb-11ea-9996-0242ac110006 pods took: 500.603444ms
STEP: Creating RC which spawns configmap-volume pods
Jan  4 14:09:44.678: INFO: Pod name wrapped-volume-race-db90741b-2efb-11ea-9996-0242ac110006: Found 0 pods out of 5
Jan  4 14:09:49.769: INFO: Pod name wrapped-volume-race-db90741b-2efb-11ea-9996-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-db90741b-2efb-11ea-9996-0242ac110006 in namespace e2e-tests-emptydir-wrapper-frb4j, will wait for the garbage collector to delete the pods
Jan  4 14:11:34.264: INFO: Deleting ReplicationController wrapped-volume-race-db90741b-2efb-11ea-9996-0242ac110006 took: 52.218893ms
Jan  4 14:11:34.565: INFO: Terminating ReplicationController wrapped-volume-race-db90741b-2efb-11ea-9996-0242ac110006 pods took: 301.249629ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 14:12:38.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-frb4j" for this suite.
Jan  4 14:13:00.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:13:00.900: INFO: namespace: e2e-tests-emptydir-wrapper-frb4j, resource: bindings, ignored listing per whitelist
Jan  4 14:13:01.295: INFO: namespace e2e-tests-emptydir-wrapper-frb4j deletion completed in 22.703621352s

• [SLOW TEST:552.311 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
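
The wrapper-volume race above comes from many pods mounting many ConfigMap volumes at the same time. A heavily scaled-down illustrative sketch of the kind of replication controller the test spawns (two configmaps instead of fifty; names, image and replica count are assumptions):

# kubectl create configmap racey-configmap-1 --from-literal=data-1=value-1
# kubectl create configmap racey-configmap-2 --from-literal=data-1=value-1
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        name: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: busybox:1.29          # assumed image
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-1
          mountPath: /etc/config-1
        - name: racey-configmap-2
          mountPath: /etc/config-2
      volumes:
      - name: racey-configmap-1
        configMap:
          name: racey-configmap-1
      - name: racey-configmap-2
        configMap:
          name: racey-configmap-2
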
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 14:13:01.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 14:13:01.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-ql26l" for this suite.
Jan  4 14:13:09.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:13:10.002: INFO: namespace: e2e-tests-services-ql26l, resource: bindings, ignored listing per whitelist
Jan  4 14:13:10.033: INFO: namespace e2e-tests-services-ql26l deletion completed in 8.353549395s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:8.738 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
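No STEP lines are emitted for this spec's [It] body; the check amounts to verifying that the built-in kubernetes service in the default namespace exposes the secure master endpoint on the https port (443). A rough manual equivalent of that inspection, not the spec's exact assertions:

# Inspect the master service the spec verifies (output is cluster-dependent).
kubectl get service kubernetes -n default
kubectl get service kubernetes -n default \
  -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # expected: 443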
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 14:13:10.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-m74ft
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-m74ft
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-m74ft
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-m74ft
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-m74ft
Jan  4 14:13:33.308: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-m74ft, name: ss-0, uid: 63a430cd-2efc-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan  4 14:13:33.484: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-m74ft, name: ss-0, uid: 63a430cd-2efc-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  4 14:13:33.531: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-m74ft, name: ss-0, uid: 63a430cd-2efc-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  4 14:13:33.674: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-m74ft
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-m74ft
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-m74ft and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  4 14:13:57.490: INFO: Deleting all statefulset in ns e2e-tests-statefulset-m74ft
Jan  4 14:13:57.500: INFO: Scaling statefulset ss to 0
Jan  4 14:14:17.557: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 14:14:17.563: INFO: Deleting statefulset ss
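The cleanup above scales the StatefulSet to zero, waits for status.replicas to reach 0, then deletes it. A rough kubectl equivalent of that teardown, using the namespace from this run:

kubectl scale statefulset ss --replicas=0 -n e2e-tests-statefulset-m74ft
# Poll until status.replicas reports 0 before deleting.
kubectl get statefulset ss -n e2e-tests-statefulset-m74ft -o jsonpath='{.status.replicas}'
kubectl delete statefulset ss -n e2e-tests-statefulset-m74ft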
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 14:14:17.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-m74ft" for this suite.
Jan  4 14:14:25.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:14:25.735: INFO: namespace: e2e-tests-statefulset-m74ft, resource: bindings, ignored listing per whitelist
Jan  4 14:14:25.988: INFO: namespace e2e-tests-statefulset-m74ft deletion completed in 8.323738852s

• [SLOW TEST:75.954 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
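As the STEP and INFO lines show, the spec first schedules a plain pod that claims a host port on the chosen node, then creates a StatefulSet whose pod requests the same port there, so ss-0 cycles Pending -> Failed and is repeatedly deleted by the controller until the blocking pod is removed. The conflicting manifests are generated inside the test and never appear in the log; the commands below only sketch how the same recreate behaviour could be observed by hand in that namespace.

# Watch ss-0 being failed, deleted and recreated while the conflict exists.
kubectl get pods -n e2e-tests-statefulset-m74ft -w

# Remove the pod holding the conflicting host port ...
kubectl delete pod test-pod -n e2e-tests-statefulset-m74ft

# ... and confirm the controller brings ss-0 to Running.
kubectl get pod ss-0 -n e2e-tests-statefulset-m74ft -o jsonpath='{.status.phase}'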
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  4 14:14:25.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 14:14:26.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vjs89'
Jan  4 14:14:28.888: INFO: stderr: ""
Jan  4 14:14:28.888: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  4 14:14:43.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vjs89 -o json'
Jan  4 14:14:44.191: INFO: stderr: ""
Jan  4 14:14:44.191: INFO: stdout:
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "creationTimestamp": "2020-01-04T14:14:28Z",
        "labels": {
            "run": "e2e-test-nginx-pod"
        },
        "name": "e2e-test-nginx-pod",
        "namespace": "e2e-tests-kubectl-vjs89",
        "resourceVersion": "17152404",
        "selfLink": "/api/v1/namespaces/e2e-tests-kubectl-vjs89/pods/e2e-test-nginx-pod",
        "uid": "851ffd2c-2efc-11ea-a994-fa163e34d433"
    },
    "spec": {
        "containers": [
            {
                "image": "docker.io/library/nginx:1.14-alpine",
                "imagePullPolicy": "IfNotPresent",
                "name": "e2e-test-nginx-pod",
                "resources": {},
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "File",
                "volumeMounts": [
                    {
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                        "name": "default-token-vfq9s",
                        "readOnly": true
                    }
                ]
            }
        ],
        "dnsPolicy": "ClusterFirst",
        "enableServiceLinks": true,
        "nodeName": "hunter-server-hu5at5svl7ps",
        "priority": 0,
        "restartPolicy": "Always",
        "schedulerName": "default-scheduler",
        "securityContext": {},
        "serviceAccount": "default",
        "serviceAccountName": "default",
        "terminationGracePeriodSeconds": 30,
        "tolerations": [
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/not-ready",
                "operator": "Exists",
                "tolerationSeconds": 300
            },
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/unreachable",
                "operator": "Exists",
                "tolerationSeconds": 300
            }
        ],
        "volumes": [
            {
                "name": "default-token-vfq9s",
                "secret": {
                    "defaultMode": 420,
                    "secretName": "default-token-vfq9s"
                }
            }
        ]
    },
    "status": {
        "conditions": [
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-04T14:14:29Z",
                "status": "True",
                "type": "Initialized"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-04T14:14:40Z",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-04T14:14:40Z",
                "status": "True",
                "type": "ContainersReady"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-04T14:14:28Z",
                "status": "True",
                "type": "PodScheduled"
            }
        ],
        "containerStatuses": [
            {
                "containerID": "docker://bd9da9f2b0713d1a05fbb2a1122cacf0994122ddd8b3dffe7c19ae47caf723dc",
                "image": "nginx:1.14-alpine",
                "imageID": "docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7",
                "lastState": {},
                "name": "e2e-test-nginx-pod",
                "ready": true,
                "restartCount": 0,
                "state": {
                    "running": {
                        "startedAt": "2020-01-04T14:14:39Z"
                    }
                }
            }
        ],
        "hostIP": "10.96.1.240",
        "phase": "Running",
        "podIP": "10.32.0.4",
        "qosClass": "BestEffort",
        "startTime": "2020-01-04T14:14:29Z"
    }
}
STEP: replace the image in the pod
Jan  4 14:14:44.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-vjs89'
Jan  4 14:14:44.569: INFO: stderr: ""
Jan  4 14:14:44.569: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
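The replace step pipes a modified copy of the live pod, with the container image switched from nginx:1.14-alpine to busybox:1.29, back through kubectl replace -f -. The test edits the JSON in Go; a rough shell equivalent, assuming jq is available, would be:

kubectl get pod e2e-test-nginx-pod -n e2e-tests-kubectl-vjs89 -o json \
  | jq '.spec.containers[0].image = "docker.io/library/busybox:1.29"' \
  | kubectl replace -f - -n e2e-tests-kubectl-vjs89

# Confirm the image was swapped, as the preceding STEP verifies.
kubectl get pod e2e-test-nginx-pod -n e2e-tests-kubectl-vjs89 \
  -o jsonpath='{.spec.containers[0].image}'

Swapping the image in place works because spec.containers[*].image is one of the few pod spec fields that may be updated on an existing pod.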
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  4 14:14:44.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vjs89'
Jan  4 14:15:02.711: INFO: stderr: ""
Jan  4 14:15:02.711: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  4 14:15:02.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vjs89" for this suite.
Jan  4 14:15:11.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:15:11.166: INFO: namespace: e2e-tests-kubectl-vjs89, resource: bindings, ignored listing per whitelist
Jan  4 14:15:11.252: INFO: namespace e2e-tests-kubectl-vjs89 deletion completed in 8.407643308s

• [SLOW TEST:45.264 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
Jan  4 14:15:11.253: INFO: Running AfterSuite actions on all nodes
Jan  4 14:15:11.253: INFO: Running AfterSuite actions on node 1
Jan  4 14:15:11.253: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9890.359 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS
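For reference, a conformance-focused run like this one (199 of 2164 specs) is typically selected with a Ginkgo focus regex on the built e2e.test binary; the flags and provider below depend on the environment, so treat this as a sketch rather than the exact invocation used for the run above.

e2e.test --ginkgo.focus='\[Conformance\]' \
  --kubeconfig=/root/.kube/config \
  --provider=skeleton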